00:00:00.001 Started by upstream project "autotest-per-patch" build number 132365 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.060 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.061 The recommended git tool is: git 00:00:00.061 using credential 00000000-0000-0000-0000-000000000002 00:00:00.063 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.093 Fetching changes from the remote Git repository 00:00:00.099 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.148 Using shallow fetch with depth 1 00:00:00.148 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.148 > git --version # timeout=10 00:00:00.197 > git --version # 'git version 2.39.2' 00:00:00.197 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.237 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.237 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.640 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.650 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.659 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.659 > git config core.sparsecheckout # timeout=10 00:00:03.668 > git read-tree -mu HEAD # timeout=10 00:00:03.684 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.701 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.702 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:03.791 [Pipeline] Start of Pipeline 00:00:03.805 [Pipeline] library 00:00:03.807 Loading library shm_lib@master 00:00:03.807 Library shm_lib@master is cached. Copying from home. 00:00:03.825 [Pipeline] node 00:00:03.832 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.835 [Pipeline] { 00:00:03.843 [Pipeline] catchError 00:00:03.844 [Pipeline] { 00:00:03.856 [Pipeline] wrap 00:00:03.865 [Pipeline] { 00:00:03.874 [Pipeline] stage 00:00:03.876 [Pipeline] { (Prologue) 00:00:04.053 [Pipeline] sh 00:00:04.340 + logger -p user.info -t JENKINS-CI 00:00:04.356 [Pipeline] echo 00:00:04.357 Node: CYP9 00:00:04.361 [Pipeline] sh 00:00:04.665 [Pipeline] setCustomBuildProperty 00:00:04.677 [Pipeline] echo 00:00:04.679 Cleanup processes 00:00:04.684 [Pipeline] sh 00:00:04.972 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.972 1043996 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.986 [Pipeline] sh 00:00:05.271 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.271 ++ grep -v 'sudo pgrep' 00:00:05.271 ++ awk '{print $1}' 00:00:05.271 + sudo kill -9 00:00:05.271 + true 00:00:05.288 [Pipeline] cleanWs 00:00:05.300 [WS-CLEANUP] Deleting project workspace... 00:00:05.300 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.307 [WS-CLEANUP] done 00:00:05.312 [Pipeline] setCustomBuildProperty 00:00:05.327 [Pipeline] sh 00:00:05.616 + sudo git config --global --replace-all safe.directory '*' 00:00:05.700 [Pipeline] httpRequest 00:00:06.073 [Pipeline] echo 00:00:06.075 Sorcerer 10.211.164.20 is alive 00:00:06.082 [Pipeline] retry 00:00:06.083 [Pipeline] { 00:00:06.092 [Pipeline] httpRequest 00:00:06.096 HttpMethod: GET 00:00:06.097 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.098 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.106 Response Code: HTTP/1.1 200 OK 00:00:06.107 Success: Status code 200 is in the accepted range: 200,404 00:00:06.107 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:18.087 [Pipeline] } 00:00:18.104 [Pipeline] // retry 00:00:18.113 [Pipeline] sh 00:00:18.406 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:18.426 [Pipeline] httpRequest 00:00:18.773 [Pipeline] echo 00:00:18.775 Sorcerer 10.211.164.20 is alive 00:00:18.787 [Pipeline] retry 00:00:18.788 [Pipeline] { 00:00:18.803 [Pipeline] httpRequest 00:00:18.808 HttpMethod: GET 00:00:18.808 URL: http://10.211.164.20/packages/spdk_6fc96a60fa896bf51b1b42f73524626c54d3caa6.tar.gz 00:00:18.809 Sending request to url: http://10.211.164.20/packages/spdk_6fc96a60fa896bf51b1b42f73524626c54d3caa6.tar.gz 00:00:18.817 Response Code: HTTP/1.1 200 OK 00:00:18.817 Success: Status code 200 is in the accepted range: 200,404 00:00:18.817 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_6fc96a60fa896bf51b1b42f73524626c54d3caa6.tar.gz 00:02:10.608 [Pipeline] } 00:02:10.627 [Pipeline] // retry 00:02:10.634 [Pipeline] sh 00:02:10.949 + tar --no-same-owner -xf spdk_6fc96a60fa896bf51b1b42f73524626c54d3caa6.tar.gz 00:02:14.269 [Pipeline] sh 00:02:14.558 + git -C spdk log --oneline -n5 00:02:14.558 6fc96a60f test/nvmf: Prepare replacements for the network setup 00:02:14.558 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb 00:02:14.558 8d982eda9 dpdk: add adjustments for recent rte_power changes 00:02:14.558 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:02:14.558 73f18e890 lib/reduce: fix the magic number of empty mapping detection. 
00:02:14.569 [Pipeline] } 00:02:14.579 [Pipeline] // stage 00:02:14.585 [Pipeline] stage 00:02:14.587 [Pipeline] { (Prepare) 00:02:14.601 [Pipeline] writeFile 00:02:14.615 [Pipeline] sh 00:02:14.901 + logger -p user.info -t JENKINS-CI 00:02:14.915 [Pipeline] sh 00:02:15.207 + logger -p user.info -t JENKINS-CI 00:02:15.221 [Pipeline] sh 00:02:15.511 + cat autorun-spdk.conf 00:02:15.511 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:15.511 SPDK_TEST_NVMF=1 00:02:15.511 SPDK_TEST_NVME_CLI=1 00:02:15.511 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:15.511 SPDK_TEST_NVMF_NICS=e810 00:02:15.511 SPDK_TEST_VFIOUSER=1 00:02:15.511 SPDK_RUN_UBSAN=1 00:02:15.511 NET_TYPE=phy 00:02:15.520 RUN_NIGHTLY=0 00:02:15.530 [Pipeline] readFile 00:02:15.561 [Pipeline] withEnv 00:02:15.564 [Pipeline] { 00:02:15.578 [Pipeline] sh 00:02:15.867 + set -ex 00:02:15.867 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:15.868 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:15.868 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:15.868 ++ SPDK_TEST_NVMF=1 00:02:15.868 ++ SPDK_TEST_NVME_CLI=1 00:02:15.868 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:15.868 ++ SPDK_TEST_NVMF_NICS=e810 00:02:15.868 ++ SPDK_TEST_VFIOUSER=1 00:02:15.868 ++ SPDK_RUN_UBSAN=1 00:02:15.868 ++ NET_TYPE=phy 00:02:15.868 ++ RUN_NIGHTLY=0 00:02:15.868 + case $SPDK_TEST_NVMF_NICS in 00:02:15.868 + DRIVERS=ice 00:02:15.868 + [[ tcp == \r\d\m\a ]] 00:02:15.868 + [[ -n ice ]] 00:02:15.868 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:15.868 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:15.868 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:15.868 rmmod: ERROR: Module irdma is not currently loaded 00:02:15.868 rmmod: ERROR: Module i40iw is not currently loaded 00:02:15.868 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:15.868 + true 00:02:15.868 + for D in $DRIVERS 00:02:15.868 + sudo modprobe ice 00:02:15.868 + exit 0 00:02:15.879 [Pipeline] } 00:02:15.896 [Pipeline] // withEnv 00:02:15.903 [Pipeline] } 00:02:15.918 [Pipeline] // stage 00:02:15.929 [Pipeline] catchError 00:02:15.932 [Pipeline] { 00:02:15.947 [Pipeline] timeout 00:02:15.947 Timeout set to expire in 1 hr 0 min 00:02:15.949 [Pipeline] { 00:02:15.963 [Pipeline] stage 00:02:15.967 [Pipeline] { (Tests) 00:02:15.981 [Pipeline] sh 00:02:16.271 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:16.271 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:16.271 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:16.271 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:16.271 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:16.272 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:16.272 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:16.272 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:16.272 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:16.272 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:16.272 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:16.272 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:16.272 + source /etc/os-release
00:02:16.272 ++ NAME='Fedora Linux'
00:02:16.272 ++ VERSION='39 (Cloud Edition)'
00:02:16.272 ++ ID=fedora
00:02:16.272 ++ VERSION_ID=39
00:02:16.272 ++ VERSION_CODENAME=
00:02:16.272 ++ PLATFORM_ID=platform:f39
00:02:16.272 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:16.272 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:16.272 ++ LOGO=fedora-logo-icon
00:02:16.272 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:16.272 ++ HOME_URL=https://fedoraproject.org/
00:02:16.272 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:16.272 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:16.272 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:16.272 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:16.272 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:16.272 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:16.272 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:16.272 ++ SUPPORT_END=2024-11-12
00:02:16.272 ++ VARIANT='Cloud Edition'
00:02:16.272 ++ VARIANT_ID=cloud
00:02:16.272 + uname -a
00:02:16.272 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:16.272 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:19.574 Hugepages
00:02:19.574 node hugesize free / total
00:02:19.574 node0 1048576kB 0 / 0
00:02:19.574 node0 2048kB 0 / 0
00:02:19.574 node1 1048576kB 0 / 0
00:02:19.574 node1 2048kB 0 / 0
00:02:19.574
00:02:19.574 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:19.574 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:02:19.574 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:02:19.574 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:02:19.574 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:02:19.574 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:02:19.574 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:02:19.574 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:02:19.574 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:02:19.574 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:02:19.574 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:02:19.574 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:02:19.574 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:02:19.574 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:02:19.574 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:02:19.574 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:02:19.574 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:02:19.574 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:02:19.574 + rm -f /tmp/spdk-ld-path
00:02:19.574 + source autorun-spdk.conf
00:02:19.574 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:19.574 ++ SPDK_TEST_NVMF=1
00:02:19.574 ++ SPDK_TEST_NVME_CLI=1
00:02:19.574 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:19.574 ++ SPDK_TEST_NVMF_NICS=e810
00:02:19.574 ++ SPDK_TEST_VFIOUSER=1
00:02:19.574 ++ SPDK_RUN_UBSAN=1
00:02:19.574 ++ NET_TYPE=phy
00:02:19.574 ++ RUN_NIGHTLY=0
00:02:19.574 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:19.574 + [[ -n '' ]]
00:02:19.574 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:19.574 + for M in /var/spdk/build-*-manifest.txt
00:02:19.574 + [[ -f
/var/spdk/build-kernel-manifest.txt ]] 00:02:19.574 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:19.574 + for M in /var/spdk/build-*-manifest.txt 00:02:19.574 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:19.574 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:19.574 + for M in /var/spdk/build-*-manifest.txt 00:02:19.574 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:19.574 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:19.574 ++ uname 00:02:19.574 + [[ Linux == \L\i\n\u\x ]] 00:02:19.574 + sudo dmesg -T 00:02:19.574 + sudo dmesg --clear 00:02:19.574 + dmesg_pid=1044976 00:02:19.574 + [[ Fedora Linux == FreeBSD ]] 00:02:19.574 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:19.574 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:19.574 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:19.574 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:19.574 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:19.574 + [[ -x /usr/src/fio-static/fio ]] 00:02:19.574 + export FIO_BIN=/usr/src/fio-static/fio 00:02:19.574 + FIO_BIN=/usr/src/fio-static/fio 00:02:19.574 + sudo dmesg -Tw 00:02:19.574 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:19.574 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:19.574 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:19.574 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:19.574 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:19.574 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:19.574 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:19.574 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:19.574 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:19.837 09:35:50 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:19.837 09:35:50 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:19.837 09:35:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:19.837 09:35:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:19.837 09:35:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:02:19.837 09:35:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:19.837 09:35:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:19.837 09:35:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:19.837 09:35:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:19.837 09:35:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:19.837 09:35:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:19.837 09:35:50 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:19.837 09:35:50 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:19.837 09:35:50 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:19.837 09:35:50 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:19.837 09:35:50 -- 
scripts/common.sh@15 -- $ shopt -s extglob 00:02:19.837 09:35:50 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:19.837 09:35:50 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:19.837 09:35:50 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:19.838 09:35:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.838 09:35:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.838 09:35:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.838 09:35:50 -- paths/export.sh@5 -- $ export PATH 00:02:19.838 09:35:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.838 09:35:50 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:19.838 09:35:50 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:19.838 09:35:50 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732091750.XXXXXX 00:02:19.838 09:35:50 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732091750.5G3zhT 00:02:19.838 09:35:50 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:19.838 09:35:50 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:19.838 09:35:50 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:19.838 09:35:50 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:19.838 09:35:50 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:19.838 09:35:50 -- 
common/autobuild_common.sh@509 -- $ get_config_params 00:02:19.838 09:35:50 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:19.838 09:35:50 -- common/autotest_common.sh@10 -- $ set +x 00:02:19.838 09:35:50 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:19.838 09:35:50 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:19.838 09:35:50 -- pm/common@17 -- $ local monitor 00:02:19.838 09:35:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.838 09:35:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.838 09:35:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.838 09:35:50 -- pm/common@21 -- $ date +%s 00:02:19.838 09:35:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.838 09:35:50 -- pm/common@21 -- $ date +%s 00:02:19.838 09:35:50 -- pm/common@25 -- $ sleep 1 00:02:19.838 09:35:50 -- pm/common@21 -- $ date +%s 00:02:19.838 09:35:50 -- pm/common@21 -- $ date +%s 00:02:19.838 09:35:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732091750 00:02:19.838 09:35:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732091750 00:02:19.838 09:35:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732091750 00:02:19.838 09:35:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732091750 00:02:19.838 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732091750_collect-cpu-load.pm.log 00:02:19.838 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732091750_collect-vmstat.pm.log 00:02:19.838 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732091750_collect-cpu-temp.pm.log 00:02:19.838 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732091750_collect-bmc-pm.bmc.pm.log 00:02:20.782 09:35:51 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:20.782 09:35:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:20.782 09:35:51 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:20.782 09:35:51 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:20.782 09:35:51 -- spdk/autobuild.sh@16 -- $ date -u 00:02:20.782 Wed Nov 20 08:35:51 AM UTC 2024 00:02:20.782 09:35:51 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:20.782 v25.01-pre-200-g6fc96a60f 00:02:20.782 09:35:51 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:20.782 09:35:51 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:20.782 09:35:51 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:20.782 09:35:51 -- common/autotest_common.sh@1105 -- $ 
'[' 3 -le 1 ']' 00:02:20.782 09:35:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:20.782 09:35:51 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.043 ************************************ 00:02:21.043 START TEST ubsan 00:02:21.043 ************************************ 00:02:21.043 09:35:51 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:21.043 using ubsan 00:02:21.043 00:02:21.043 real 0m0.001s 00:02:21.043 user 0m0.001s 00:02:21.043 sys 0m0.000s 00:02:21.043 09:35:51 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:21.043 09:35:51 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:21.043 ************************************ 00:02:21.043 END TEST ubsan 00:02:21.043 ************************************ 00:02:21.043 09:35:51 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:21.043 09:35:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:21.043 09:35:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:21.043 09:35:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:21.043 09:35:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:21.043 09:35:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:21.043 09:35:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:21.043 09:35:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:21.043 09:35:51 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:21.043 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:21.043 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:21.615 Using 'verbs' RDMA provider 00:02:37.500 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:49.743 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:50.265 Creating mk/config.mk...done. 00:02:50.265 Creating mk/cc.flags.mk...done. 00:02:50.265 Type 'make' to build. 00:02:50.265 09:36:21 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:02:50.265 09:36:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:50.265 09:36:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:50.265 09:36:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:50.527 ************************************ 00:02:50.527 START TEST make 00:02:50.527 ************************************ 00:02:50.527 09:36:21 make -- common/autotest_common.sh@1129 -- $ make -j144 00:02:50.789 make[1]: Nothing to be done for 'all'. 
00:02:52.179 The Meson build system
00:02:52.179 Version: 1.5.0
00:02:52.179 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:52.179 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:52.179 Build type: native build
00:02:52.179 Project name: libvfio-user
00:02:52.179 Project version: 0.0.1
00:02:52.179 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:52.179 C linker for the host machine: cc ld.bfd 2.40-14
00:02:52.179 Host machine cpu family: x86_64
00:02:52.179 Host machine cpu: x86_64
00:02:52.179 Run-time dependency threads found: YES
00:02:52.179 Library dl found: YES
00:02:52.179 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:52.179 Run-time dependency json-c found: YES 0.17
00:02:52.179 Run-time dependency cmocka found: YES 1.1.7
00:02:52.179 Program pytest-3 found: NO
00:02:52.179 Program flake8 found: NO
00:02:52.179 Program misspell-fixer found: NO
00:02:52.179 Program restructuredtext-lint found: NO
00:02:52.179 Program valgrind found: YES (/usr/bin/valgrind)
00:02:52.179 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:52.179 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:52.179 Compiler for C supports arguments -Wwrite-strings: YES
00:02:52.179 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:52.179 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:52.179 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:52.179 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:52.179 Build targets in project: 8
00:02:52.179 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:52.179 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:52.179
00:02:52.179 libvfio-user 0.0.1
00:02:52.179
00:02:52.179 User defined options
00:02:52.179 buildtype : debug
00:02:52.179 default_library: shared
00:02:52.179 libdir : /usr/local/lib
00:02:52.179
00:02:52.179 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:52.752 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:52.752 [1/37] Compiling C object samples/null.p/null.c.o
00:02:52.752 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:52.752 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:52.752 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:52.752 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:52.752 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:52.752 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:52.752 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:52.752 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:52.752 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:52.752 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:52.752 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:52.752 [13/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:52.752 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:52.752 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:52.752 [16/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:52.752 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:52.752 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:52.752 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:52.752 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:52.752 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:52.752 [22/37] Compiling C object samples/server.p/server.c.o
00:02:52.752 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:52.752 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:52.752 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:53.014 [26/37] Compiling C object samples/client.p/client.c.o
00:02:53.014 [27/37] Linking target samples/client
00:02:53.014 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:53.014 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:53.014 [30/37] Linking target test/unit_tests
00:02:53.276 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:02:53.276 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:53.276 [33/37] Linking target samples/lspci
00:02:53.276 [34/37] Linking target samples/null
00:02:53.276 [35/37] Linking target samples/server
00:02:53.276 [36/37] Linking target samples/gpio-pci-idio-16
00:02:53.276 [37/37] Linking target samples/shadow_ioeventfd_server
00:02:53.276 INFO: autodetecting backend as ninja
00:02:53.276 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:53.276 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:53.848 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:53.848 ninja: no work to do. 00:02:59.137 The Meson build system 00:02:59.137 Version: 1.5.0 00:02:59.137 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:59.137 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:59.137 Build type: native build 00:02:59.137 Program cat found: YES (/usr/bin/cat) 00:02:59.137 Project name: DPDK 00:02:59.137 Project version: 24.03.0 00:02:59.137 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:59.137 C linker for the host machine: cc ld.bfd 2.40-14 00:02:59.137 Host machine cpu family: x86_64 00:02:59.137 Host machine cpu: x86_64 00:02:59.137 Message: ## Building in Developer Mode ## 00:02:59.137 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:59.137 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:59.137 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:59.137 Program python3 found: YES (/usr/bin/python3) 00:02:59.137 Program cat found: YES (/usr/bin/cat) 00:02:59.137 Compiler for C supports arguments -march=native: YES 00:02:59.137 Checking for size of "void *" : 8 00:02:59.137 Checking for size of "void *" : 8 (cached) 00:02:59.137 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:59.137 Library m found: YES 00:02:59.137 Library numa found: YES 00:02:59.137 Has header "numaif.h" : YES 00:02:59.137 Library fdt found: NO 00:02:59.137 Library execinfo found: NO 00:02:59.137 Has header "execinfo.h" : YES 00:02:59.137 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:59.137 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:59.137 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:59.137 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:59.137 Run-time dependency openssl found: YES 3.1.1 00:02:59.137 Run-time dependency libpcap found: YES 1.10.4 00:02:59.137 Has header "pcap.h" with dependency libpcap: YES 00:02:59.137 Compiler for C supports arguments -Wcast-qual: YES 00:02:59.137 Compiler for C supports arguments -Wdeprecated: YES 00:02:59.137 Compiler for C supports arguments -Wformat: YES 00:02:59.137 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:59.137 Compiler for C supports arguments -Wformat-security: NO 00:02:59.137 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:59.137 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:59.137 Compiler for C supports arguments -Wnested-externs: YES 00:02:59.137 Compiler for C supports arguments -Wold-style-definition: YES 00:02:59.137 Compiler for C supports arguments -Wpointer-arith: YES 00:02:59.137 Compiler for C supports arguments -Wsign-compare: YES 00:02:59.137 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:59.137 Compiler for C supports arguments -Wundef: YES 00:02:59.137 Compiler for C supports arguments -Wwrite-strings: YES 00:02:59.137 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:59.137 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:02:59.137 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:59.137 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:59.137 Program objdump found: YES (/usr/bin/objdump) 00:02:59.137 Compiler for C supports arguments -mavx512f: YES 00:02:59.137 Checking if "AVX512 checking" compiles: YES 00:02:59.137 Fetching value of define "__SSE4_2__" : 1 00:02:59.137 Fetching value of define "__AES__" : 1 00:02:59.137 Fetching value of define "__AVX__" : 1 00:02:59.137 Fetching value of define "__AVX2__" : 1 00:02:59.137 Fetching value of define "__AVX512BW__" : 1 00:02:59.137 Fetching value of define "__AVX512CD__" : 1 00:02:59.137 Fetching value of define "__AVX512DQ__" : 1 00:02:59.137 Fetching value of define "__AVX512F__" : 1 00:02:59.137 Fetching value of define "__AVX512VL__" : 1 00:02:59.137 Fetching value of define "__PCLMUL__" : 1 00:02:59.137 Fetching value of define "__RDRND__" : 1 00:02:59.137 Fetching value of define "__RDSEED__" : 1 00:02:59.138 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:59.138 Fetching value of define "__znver1__" : (undefined) 00:02:59.138 Fetching value of define "__znver2__" : (undefined) 00:02:59.138 Fetching value of define "__znver3__" : (undefined) 00:02:59.138 Fetching value of define "__znver4__" : (undefined) 00:02:59.138 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:59.138 Message: lib/log: Defining dependency "log" 00:02:59.138 Message: lib/kvargs: Defining dependency "kvargs" 00:02:59.138 Message: lib/telemetry: Defining dependency "telemetry" 00:02:59.138 Checking for function "getentropy" : NO 00:02:59.138 Message: lib/eal: Defining dependency "eal" 00:02:59.138 Message: lib/ring: Defining dependency "ring" 00:02:59.138 Message: lib/rcu: Defining dependency "rcu" 00:02:59.138 Message: lib/mempool: Defining dependency "mempool" 00:02:59.138 Message: lib/mbuf: Defining dependency "mbuf" 00:02:59.138 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:59.138 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:59.138 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:59.138 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:59.138 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:59.138 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:59.138 Compiler for C supports arguments -mpclmul: YES 00:02:59.138 Compiler for C supports arguments -maes: YES 00:02:59.138 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:59.138 Compiler for C supports arguments -mavx512bw: YES 00:02:59.138 Compiler for C supports arguments -mavx512dq: YES 00:02:59.138 Compiler for C supports arguments -mavx512vl: YES 00:02:59.138 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:59.138 Compiler for C supports arguments -mavx2: YES 00:02:59.138 Compiler for C supports arguments -mavx: YES 00:02:59.138 Message: lib/net: Defining dependency "net" 00:02:59.138 Message: lib/meter: Defining dependency "meter" 00:02:59.138 Message: lib/ethdev: Defining dependency "ethdev" 00:02:59.138 Message: lib/pci: Defining dependency "pci" 00:02:59.138 Message: lib/cmdline: Defining dependency "cmdline" 00:02:59.138 Message: lib/hash: Defining dependency "hash" 00:02:59.138 Message: lib/timer: Defining dependency "timer" 00:02:59.138 Message: lib/compressdev: Defining dependency "compressdev" 00:02:59.138 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:59.138 Message: lib/dmadev: Defining dependency "dmadev" 
00:02:59.138 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:59.138 Message: lib/power: Defining dependency "power"
00:02:59.138 Message: lib/reorder: Defining dependency "reorder"
00:02:59.138 Message: lib/security: Defining dependency "security"
00:02:59.138 Has header "linux/userfaultfd.h" : YES
00:02:59.138 Has header "linux/vduse.h" : YES
00:02:59.138 Message: lib/vhost: Defining dependency "vhost"
00:02:59.138 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:59.138 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:59.138 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:59.138 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:59.138 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:59.138 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:59.138 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:59.138 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:59.138 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:59.138 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:59.138 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:59.138 Configuring doxy-api-html.conf using configuration
00:02:59.138 Configuring doxy-api-man.conf using configuration
00:02:59.138 Program mandb found: YES (/usr/bin/mandb)
00:02:59.138 Program sphinx-build found: NO
00:02:59.138 Configuring rte_build_config.h using configuration
00:02:59.138 Message:
00:02:59.138 =================
00:02:59.138 Applications Enabled
00:02:59.138 =================
00:02:59.138
00:02:59.138 apps:
00:02:59.138
00:02:59.138
00:02:59.138 Message:
00:02:59.138 =================
00:02:59.138 Libraries Enabled
00:02:59.138 =================
00:02:59.138
00:02:59.138 libs:
00:02:59.138 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:59.138 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:59.138 cryptodev, dmadev, power, reorder, security, vhost,
00:02:59.138
00:02:59.138 Message:
00:02:59.138 ===============
00:02:59.138 Drivers Enabled
00:02:59.138 ===============
00:02:59.138
00:02:59.138 common:
00:02:59.138
00:02:59.138 bus:
00:02:59.138 pci, vdev,
00:02:59.138 mempool:
00:02:59.138 ring,
00:02:59.138 dma:
00:02:59.138
00:02:59.138 net:
00:02:59.138
00:02:59.138 crypto:
00:02:59.138
00:02:59.138 compress:
00:02:59.138
00:02:59.138 vdpa:
00:02:59.138
00:02:59.138
00:02:59.138 Message:
00:02:59.138 =================
00:02:59.138 Content Skipped
00:02:59.138 =================
00:02:59.138
00:02:59.138 apps:
00:02:59.138 dumpcap: explicitly disabled via build config
00:02:59.138 graph: explicitly disabled via build config
00:02:59.138 pdump: explicitly disabled via build config
00:02:59.138 proc-info: explicitly disabled via build config
00:02:59.138 test-acl: explicitly disabled via build config
00:02:59.138 test-bbdev: explicitly disabled via build config
00:02:59.138 test-cmdline: explicitly disabled via build config
00:02:59.138 test-compress-perf: explicitly disabled via build config
00:02:59.138 test-crypto-perf: explicitly disabled via build config
00:02:59.138 test-dma-perf: explicitly disabled via build config
00:02:59.138 test-eventdev: explicitly disabled via build config
00:02:59.138 test-fib: explicitly disabled via build config
00:02:59.138 test-flow-perf: explicitly disabled via build config
00:02:59.138 test-gpudev: explicitly disabled
via build config 00:02:59.138 test-mldev: explicitly disabled via build config 00:02:59.138 test-pipeline: explicitly disabled via build config 00:02:59.138 test-pmd: explicitly disabled via build config 00:02:59.138 test-regex: explicitly disabled via build config 00:02:59.138 test-sad: explicitly disabled via build config 00:02:59.138 test-security-perf: explicitly disabled via build config 00:02:59.138 00:02:59.138 libs: 00:02:59.138 argparse: explicitly disabled via build config 00:02:59.138 metrics: explicitly disabled via build config 00:02:59.138 acl: explicitly disabled via build config 00:02:59.138 bbdev: explicitly disabled via build config 00:02:59.138 bitratestats: explicitly disabled via build config 00:02:59.138 bpf: explicitly disabled via build config 00:02:59.138 cfgfile: explicitly disabled via build config 00:02:59.138 distributor: explicitly disabled via build config 00:02:59.138 efd: explicitly disabled via build config 00:02:59.138 eventdev: explicitly disabled via build config 00:02:59.138 dispatcher: explicitly disabled via build config 00:02:59.138 gpudev: explicitly disabled via build config 00:02:59.138 gro: explicitly disabled via build config 00:02:59.138 gso: explicitly disabled via build config 00:02:59.138 ip_frag: explicitly disabled via build config 00:02:59.138 jobstats: explicitly disabled via build config 00:02:59.138 latencystats: explicitly disabled via build config 00:02:59.138 lpm: explicitly disabled via build config 00:02:59.138 member: explicitly disabled via build config 00:02:59.138 pcapng: explicitly disabled via build config 00:02:59.138 rawdev: explicitly disabled via build config 00:02:59.139 regexdev: explicitly disabled via build config 00:02:59.139 mldev: explicitly disabled via build config 00:02:59.139 rib: explicitly disabled via build config 00:02:59.139 sched: explicitly disabled via build config 00:02:59.139 stack: explicitly disabled via build config 00:02:59.139 ipsec: explicitly disabled via build config 00:02:59.139 pdcp: explicitly disabled via build config 00:02:59.139 fib: explicitly disabled via build config 00:02:59.139 port: explicitly disabled via build config 00:02:59.139 pdump: explicitly disabled via build config 00:02:59.139 table: explicitly disabled via build config 00:02:59.139 pipeline: explicitly disabled via build config 00:02:59.139 graph: explicitly disabled via build config 00:02:59.139 node: explicitly disabled via build config 00:02:59.139 00:02:59.139 drivers: 00:02:59.139 common/cpt: not in enabled drivers build config 00:02:59.139 common/dpaax: not in enabled drivers build config 00:02:59.139 common/iavf: not in enabled drivers build config 00:02:59.139 common/idpf: not in enabled drivers build config 00:02:59.139 common/ionic: not in enabled drivers build config 00:02:59.139 common/mvep: not in enabled drivers build config 00:02:59.139 common/octeontx: not in enabled drivers build config 00:02:59.139 bus/auxiliary: not in enabled drivers build config 00:02:59.139 bus/cdx: not in enabled drivers build config 00:02:59.139 bus/dpaa: not in enabled drivers build config 00:02:59.139 bus/fslmc: not in enabled drivers build config 00:02:59.139 bus/ifpga: not in enabled drivers build config 00:02:59.139 bus/platform: not in enabled drivers build config 00:02:59.139 bus/uacce: not in enabled drivers build config 00:02:59.139 bus/vmbus: not in enabled drivers build config 00:02:59.139 common/cnxk: not in enabled drivers build config 00:02:59.139 common/mlx5: not in enabled drivers build config 00:02:59.139 
common/nfp: not in enabled drivers build config 00:02:59.139 common/nitrox: not in enabled drivers build config 00:02:59.139 common/qat: not in enabled drivers build config 00:02:59.139 common/sfc_efx: not in enabled drivers build config 00:02:59.139 mempool/bucket: not in enabled drivers build config 00:02:59.139 mempool/cnxk: not in enabled drivers build config 00:02:59.139 mempool/dpaa: not in enabled drivers build config 00:02:59.139 mempool/dpaa2: not in enabled drivers build config 00:02:59.139 mempool/octeontx: not in enabled drivers build config 00:02:59.139 mempool/stack: not in enabled drivers build config 00:02:59.139 dma/cnxk: not in enabled drivers build config 00:02:59.139 dma/dpaa: not in enabled drivers build config 00:02:59.139 dma/dpaa2: not in enabled drivers build config 00:02:59.139 dma/hisilicon: not in enabled drivers build config 00:02:59.139 dma/idxd: not in enabled drivers build config 00:02:59.139 dma/ioat: not in enabled drivers build config 00:02:59.139 dma/skeleton: not in enabled drivers build config 00:02:59.139 net/af_packet: not in enabled drivers build config 00:02:59.139 net/af_xdp: not in enabled drivers build config 00:02:59.139 net/ark: not in enabled drivers build config 00:02:59.139 net/atlantic: not in enabled drivers build config 00:02:59.139 net/avp: not in enabled drivers build config 00:02:59.139 net/axgbe: not in enabled drivers build config 00:02:59.139 net/bnx2x: not in enabled drivers build config 00:02:59.139 net/bnxt: not in enabled drivers build config 00:02:59.139 net/bonding: not in enabled drivers build config 00:02:59.139 net/cnxk: not in enabled drivers build config 00:02:59.139 net/cpfl: not in enabled drivers build config 00:02:59.139 net/cxgbe: not in enabled drivers build config 00:02:59.139 net/dpaa: not in enabled drivers build config 00:02:59.139 net/dpaa2: not in enabled drivers build config 00:02:59.139 net/e1000: not in enabled drivers build config 00:02:59.139 net/ena: not in enabled drivers build config 00:02:59.139 net/enetc: not in enabled drivers build config 00:02:59.139 net/enetfec: not in enabled drivers build config 00:02:59.139 net/enic: not in enabled drivers build config 00:02:59.139 net/failsafe: not in enabled drivers build config 00:02:59.139 net/fm10k: not in enabled drivers build config 00:02:59.139 net/gve: not in enabled drivers build config 00:02:59.139 net/hinic: not in enabled drivers build config 00:02:59.139 net/hns3: not in enabled drivers build config 00:02:59.139 net/i40e: not in enabled drivers build config 00:02:59.139 net/iavf: not in enabled drivers build config 00:02:59.139 net/ice: not in enabled drivers build config 00:02:59.139 net/idpf: not in enabled drivers build config 00:02:59.139 net/igc: not in enabled drivers build config 00:02:59.139 net/ionic: not in enabled drivers build config 00:02:59.139 net/ipn3ke: not in enabled drivers build config 00:02:59.139 net/ixgbe: not in enabled drivers build config 00:02:59.139 net/mana: not in enabled drivers build config 00:02:59.139 net/memif: not in enabled drivers build config 00:02:59.139 net/mlx4: not in enabled drivers build config 00:02:59.139 net/mlx5: not in enabled drivers build config 00:02:59.139 net/mvneta: not in enabled drivers build config 00:02:59.139 net/mvpp2: not in enabled drivers build config 00:02:59.139 net/netvsc: not in enabled drivers build config 00:02:59.139 net/nfb: not in enabled drivers build config 00:02:59.139 net/nfp: not in enabled drivers build config 00:02:59.139 net/ngbe: not in enabled drivers build 
config 00:02:59.139 net/null: not in enabled drivers build config 00:02:59.139 net/octeontx: not in enabled drivers build config 00:02:59.139 net/octeon_ep: not in enabled drivers build config 00:02:59.139 net/pcap: not in enabled drivers build config 00:02:59.139 net/pfe: not in enabled drivers build config 00:02:59.139 net/qede: not in enabled drivers build config 00:02:59.139 net/ring: not in enabled drivers build config 00:02:59.139 net/sfc: not in enabled drivers build config 00:02:59.139 net/softnic: not in enabled drivers build config 00:02:59.139 net/tap: not in enabled drivers build config 00:02:59.139 net/thunderx: not in enabled drivers build config 00:02:59.139 net/txgbe: not in enabled drivers build config 00:02:59.139 net/vdev_netvsc: not in enabled drivers build config 00:02:59.139 net/vhost: not in enabled drivers build config 00:02:59.139 net/virtio: not in enabled drivers build config 00:02:59.139 net/vmxnet3: not in enabled drivers build config 00:02:59.139 raw/*: missing internal dependency, "rawdev" 00:02:59.139 crypto/armv8: not in enabled drivers build config 00:02:59.139 crypto/bcmfs: not in enabled drivers build config 00:02:59.139 crypto/caam_jr: not in enabled drivers build config 00:02:59.139 crypto/ccp: not in enabled drivers build config 00:02:59.139 crypto/cnxk: not in enabled drivers build config 00:02:59.139 crypto/dpaa_sec: not in enabled drivers build config 00:02:59.139 crypto/dpaa2_sec: not in enabled drivers build config 00:02:59.139 crypto/ipsec_mb: not in enabled drivers build config 00:02:59.139 crypto/mlx5: not in enabled drivers build config 00:02:59.139 crypto/mvsam: not in enabled drivers build config 00:02:59.139 crypto/nitrox: not in enabled drivers build config 00:02:59.139 crypto/null: not in enabled drivers build config 00:02:59.139 crypto/octeontx: not in enabled drivers build config 00:02:59.139 crypto/openssl: not in enabled drivers build config 00:02:59.139 crypto/scheduler: not in enabled drivers build config 00:02:59.139 crypto/uadk: not in enabled drivers build config 00:02:59.139 crypto/virtio: not in enabled drivers build config 00:02:59.139 compress/isal: not in enabled drivers build config 00:02:59.139 compress/mlx5: not in enabled drivers build config 00:02:59.139 compress/nitrox: not in enabled drivers build config 00:02:59.139 compress/octeontx: not in enabled drivers build config 00:02:59.139 compress/zlib: not in enabled drivers build config 00:02:59.139 regex/*: missing internal dependency, "regexdev" 00:02:59.139 ml/*: missing internal dependency, "mldev" 00:02:59.139 vdpa/ifc: not in enabled drivers build config 00:02:59.139 vdpa/mlx5: not in enabled drivers build config 00:02:59.139 vdpa/nfp: not in enabled drivers build config 00:02:59.139 vdpa/sfc: not in enabled drivers build config 00:02:59.139 event/*: missing internal dependency, "eventdev" 00:02:59.139 baseband/*: missing internal dependency, "bbdev" 00:02:59.139 gpu/*: missing internal dependency, "gpudev" 00:02:59.139 00:02:59.139 00:02:59.712 Build targets in project: 84 00:02:59.712 00:02:59.712 DPDK 24.03.0 00:02:59.712 00:02:59.712 User defined options 00:02:59.712 buildtype : debug 00:02:59.712 default_library : shared 00:02:59.712 libdir : lib 00:02:59.712 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:59.712 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:59.712 c_link_args : 00:02:59.712 cpu_instruction_set: native 00:02:59.712 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:59.712 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:59.712 enable_docs : false 00:02:59.712 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:59.712 enable_kmods : false 00:02:59.712 max_lcores : 128 00:02:59.712 tests : false 00:02:59.712 00:02:59.712 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:59.982 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:59.982 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:59.982 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:59.982 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:59.982 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:00.249 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:00.249 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:00.249 [7/267] Linking static target lib/librte_kvargs.a 00:03:00.249 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:00.249 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:00.249 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:00.249 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:00.249 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:00.249 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:00.249 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:00.249 [15/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:00.249 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:00.249 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:00.249 [18/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:00.249 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:00.249 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:00.249 [21/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:00.249 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:00.249 [23/267] Linking static target lib/librte_log.a 00:03:00.249 [24/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:00.249 [25/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:00.249 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:00.249 [27/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:00.249 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:00.249 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:00.249 [30/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:00.249 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:00.249 [32/267] Linking static target lib/librte_pci.a 00:03:00.510 [33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:00.510 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:00.510 [35/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:00.510 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:00.510 [37/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:00.510 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:00.510 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:00.510 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.770 [41/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.770 [42/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:00.770 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:00.770 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:00.770 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:00.770 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:00.770 [47/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:00.770 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:00.770 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:00.770 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:00.770 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:00.770 [52/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:00.770 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:00.771 [54/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:00.771 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:00.771 [56/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:00.771 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:00.771 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:00.771 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:00.771 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:00.771 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:00.771 [62/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:00.771 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:00.771 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:00.771 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:00.771 [66/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:00.771 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:00.771 [68/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:00.771 [69/267] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:00.771 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:00.771 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:00.771 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:00.771 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:00.771 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:00.771 [75/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:00.771 [76/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:00.771 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:00.771 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:00.771 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:00.771 [80/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:00.771 [81/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:00.771 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:00.771 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:00.771 [84/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:00.771 [85/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:00.771 [86/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:00.771 [87/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:00.771 [88/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:00.771 [89/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:00.771 [90/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:00.771 [91/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:00.771 [92/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:00.771 [93/267] Linking static target lib/librte_ring.a 00:03:00.771 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:00.771 [95/267] Linking static target lib/librte_meter.a 00:03:00.771 [96/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:00.771 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:00.771 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:00.771 [99/267] Linking static target lib/librte_telemetry.a 00:03:00.771 [100/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:00.771 [101/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:00.771 [102/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:00.771 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:00.771 [104/267] Linking static target lib/librte_timer.a 00:03:00.771 [105/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:00.771 [106/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:00.771 [107/267] Linking static target lib/librte_mempool.a 00:03:00.771 [108/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:00.771 [109/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:00.771 [110/267] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:00.771 [111/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:00.771 [112/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:00.771 [113/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:00.771 [114/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:00.771 [115/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:00.771 [116/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:00.771 [117/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:00.771 [118/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:00.771 [119/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:00.771 [120/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:00.771 [121/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:00.771 [122/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:00.771 [123/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:00.771 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:00.771 [125/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:00.771 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:00.771 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:00.771 [128/267] Linking static target lib/librte_cmdline.a 00:03:00.771 [129/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:00.771 [130/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:00.771 [131/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:00.771 [132/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:00.771 [133/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:00.771 [134/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:00.771 [135/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:00.771 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:00.771 [137/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:00.771 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:00.771 [139/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:00.771 [140/267] Linking static target lib/librte_net.a 00:03:00.771 [141/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:00.771 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:00.771 [143/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:00.771 [144/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:00.771 [145/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.771 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:00.771 [147/267] Linking static target lib/librte_compressdev.a 00:03:00.771 [148/267] Linking static target lib/librte_dmadev.a 00:03:00.771 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:00.771 [150/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:00.771 [151/267] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:00.771 [152/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:00.771 [153/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:01.033 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:01.033 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:01.033 [156/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:01.033 [157/267] Linking target lib/librte_log.so.24.1 00:03:01.033 [158/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:01.033 [159/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:01.033 [160/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:01.033 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:01.033 [162/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:01.033 [163/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:01.033 [164/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:01.033 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:01.033 [166/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:01.033 [167/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:01.033 [168/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:01.033 [169/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:01.033 [170/267] Linking static target lib/librte_eal.a 00:03:01.033 [171/267] Linking static target lib/librte_rcu.a 00:03:01.033 [172/267] Linking static target lib/librte_security.a 00:03:01.033 [173/267] Linking static target lib/librte_power.a 00:03:01.033 [174/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:01.033 [175/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:01.033 [176/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:01.033 [177/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:01.033 [178/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:01.033 [179/267] Linking static target lib/librte_reorder.a 00:03:01.033 [180/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.033 [181/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:01.033 [182/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:01.033 [183/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:01.033 [184/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:01.033 [185/267] Linking static target drivers/librte_bus_vdev.a 00:03:01.033 [186/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.033 [187/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:01.033 [188/267] Linking target lib/librte_kvargs.so.24.1 00:03:01.033 [189/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:01.033 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:01.033 [191/267] Linking static target lib/librte_hash.a 00:03:01.033 [192/267] Generating 
drivers/rte_bus_pci.pmd.c with a custom command 00:03:01.033 [193/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:01.033 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:01.295 [195/267] Linking static target lib/librte_mbuf.a 00:03:01.295 [196/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:01.295 [197/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:01.295 [198/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:01.295 [199/267] Linking static target drivers/librte_bus_pci.a 00:03:01.295 [200/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:01.295 [201/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:01.295 [202/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:01.295 [203/267] Linking static target drivers/librte_mempool_ring.a 00:03:01.295 [204/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.295 [205/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.295 [206/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:01.295 [207/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.295 [208/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:01.295 [209/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.295 [210/267] Linking static target lib/librte_cryptodev.a 00:03:01.567 [211/267] Linking target lib/librte_telemetry.so.24.1 00:03:01.567 [212/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.567 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.567 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:01.567 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.567 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.567 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.832 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:01.832 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:01.832 [220/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.832 [221/267] Linking static target lib/librte_ethdev.a 00:03:02.094 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.094 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.094 [224/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.094 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.355 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.617 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:02.617 [228/267] Linking static 
target lib/librte_vhost.a 00:03:03.559 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.944 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.530 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.474 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.474 [233/267] Linking target lib/librte_eal.so.24.1 00:03:12.735 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:12.735 [235/267] Linking target lib/librte_timer.so.24.1 00:03:12.735 [236/267] Linking target lib/librte_ring.so.24.1 00:03:12.735 [237/267] Linking target lib/librte_meter.so.24.1 00:03:12.735 [238/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:12.735 [239/267] Linking target lib/librte_pci.so.24.1 00:03:12.735 [240/267] Linking target lib/librte_dmadev.so.24.1 00:03:12.735 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:12.735 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:12.735 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:12.735 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:12.735 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:12.997 [246/267] Linking target lib/librte_rcu.so.24.1 00:03:12.997 [247/267] Linking target lib/librte_mempool.so.24.1 00:03:12.997 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:12.997 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:12.997 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:12.997 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:12.997 [252/267] Linking target lib/librte_mbuf.so.24.1 00:03:13.258 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:13.258 [254/267] Linking target lib/librte_compressdev.so.24.1 00:03:13.258 [255/267] Linking target lib/librte_reorder.so.24.1 00:03:13.258 [256/267] Linking target lib/librte_net.so.24.1 00:03:13.258 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:03:13.258 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:13.258 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:13.519 [260/267] Linking target lib/librte_hash.so.24.1 00:03:13.519 [261/267] Linking target lib/librte_cmdline.so.24.1 00:03:13.519 [262/267] Linking target lib/librte_security.so.24.1 00:03:13.519 [263/267] Linking target lib/librte_ethdev.so.24.1 00:03:13.519 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:13.519 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:13.519 [266/267] Linking target lib/librte_power.so.24.1 00:03:13.519 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:13.781 INFO: autodetecting backend as ninja 00:03:13.781 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:03:17.083 CC lib/ut_mock/mock.o 00:03:17.083 CC lib/log/log.o 00:03:17.083 CC lib/log/log_flags.o 00:03:17.083 CC lib/ut/ut.o 
00:03:17.083 CC lib/log/log_deprecated.o 00:03:17.344 LIB libspdk_ut_mock.a 00:03:17.344 LIB libspdk_ut.a 00:03:17.344 SO libspdk_ut_mock.so.6.0 00:03:17.344 LIB libspdk_log.a 00:03:17.344 SO libspdk_ut.so.2.0 00:03:17.344 SO libspdk_log.so.7.1 00:03:17.344 SYMLINK libspdk_ut_mock.so 00:03:17.344 SYMLINK libspdk_ut.so 00:03:17.344 SYMLINK libspdk_log.so 00:03:17.916 CXX lib/trace_parser/trace.o 00:03:17.916 CC lib/util/base64.o 00:03:17.916 CC lib/util/bit_array.o 00:03:17.916 CC lib/util/cpuset.o 00:03:17.916 CC lib/ioat/ioat.o 00:03:17.916 CC lib/util/crc16.o 00:03:17.916 CC lib/util/crc32.o 00:03:17.916 CC lib/util/crc32c.o 00:03:17.916 CC lib/dma/dma.o 00:03:17.916 CC lib/util/crc32_ieee.o 00:03:17.916 CC lib/util/crc64.o 00:03:17.916 CC lib/util/dif.o 00:03:17.916 CC lib/util/fd.o 00:03:17.916 CC lib/util/fd_group.o 00:03:17.916 CC lib/util/file.o 00:03:17.916 CC lib/util/hexlify.o 00:03:17.916 CC lib/util/iov.o 00:03:17.916 CC lib/util/math.o 00:03:17.916 CC lib/util/net.o 00:03:17.916 CC lib/util/pipe.o 00:03:17.916 CC lib/util/strerror_tls.o 00:03:17.916 CC lib/util/string.o 00:03:17.916 CC lib/util/uuid.o 00:03:17.916 CC lib/util/xor.o 00:03:17.916 CC lib/util/zipf.o 00:03:17.916 CC lib/util/md5.o 00:03:17.916 CC lib/vfio_user/host/vfio_user_pci.o 00:03:17.916 CC lib/vfio_user/host/vfio_user.o 00:03:18.177 LIB libspdk_dma.a 00:03:18.177 SO libspdk_dma.so.5.0 00:03:18.177 LIB libspdk_ioat.a 00:03:18.177 SYMLINK libspdk_dma.so 00:03:18.177 SO libspdk_ioat.so.7.0 00:03:18.177 SYMLINK libspdk_ioat.so 00:03:18.177 LIB libspdk_vfio_user.a 00:03:18.177 SO libspdk_vfio_user.so.5.0 00:03:18.438 LIB libspdk_util.a 00:03:18.438 SYMLINK libspdk_vfio_user.so 00:03:18.438 SO libspdk_util.so.10.1 00:03:18.438 SYMLINK libspdk_util.so 00:03:18.700 LIB libspdk_trace_parser.a 00:03:18.700 SO libspdk_trace_parser.so.6.0 00:03:18.700 SYMLINK libspdk_trace_parser.so 00:03:18.961 CC lib/json/json_parse.o 00:03:18.961 CC lib/json/json_util.o 00:03:18.961 CC lib/idxd/idxd.o 00:03:18.961 CC lib/json/json_write.o 00:03:18.961 CC lib/idxd/idxd_user.o 00:03:18.961 CC lib/idxd/idxd_kernel.o 00:03:18.961 CC lib/rdma_utils/rdma_utils.o 00:03:18.961 CC lib/env_dpdk/env.o 00:03:18.961 CC lib/env_dpdk/memory.o 00:03:18.961 CC lib/vmd/vmd.o 00:03:18.961 CC lib/conf/conf.o 00:03:18.961 CC lib/env_dpdk/pci.o 00:03:18.961 CC lib/vmd/led.o 00:03:18.961 CC lib/env_dpdk/init.o 00:03:18.961 CC lib/env_dpdk/threads.o 00:03:18.961 CC lib/env_dpdk/pci_ioat.o 00:03:18.961 CC lib/env_dpdk/pci_virtio.o 00:03:18.961 CC lib/env_dpdk/pci_vmd.o 00:03:18.961 CC lib/env_dpdk/pci_idxd.o 00:03:18.961 CC lib/env_dpdk/pci_event.o 00:03:18.961 CC lib/env_dpdk/sigbus_handler.o 00:03:18.961 CC lib/env_dpdk/pci_dpdk.o 00:03:18.961 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:18.961 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:19.222 LIB libspdk_json.a 00:03:19.222 LIB libspdk_conf.a 00:03:19.222 SO libspdk_json.so.6.0 00:03:19.222 SO libspdk_conf.so.6.0 00:03:19.222 LIB libspdk_rdma_utils.a 00:03:19.222 SO libspdk_rdma_utils.so.1.0 00:03:19.222 SYMLINK libspdk_conf.so 00:03:19.222 SYMLINK libspdk_json.so 00:03:19.222 SYMLINK libspdk_rdma_utils.so 00:03:19.222 LIB libspdk_idxd.a 00:03:19.484 SO libspdk_idxd.so.12.1 00:03:19.484 SYMLINK libspdk_idxd.so 00:03:19.484 LIB libspdk_vmd.a 00:03:19.484 SO libspdk_vmd.so.6.0 00:03:19.747 CC lib/jsonrpc/jsonrpc_server.o 00:03:19.747 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:19.747 CC lib/jsonrpc/jsonrpc_client.o 00:03:19.747 SYMLINK libspdk_vmd.so 00:03:19.747 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:19.747 CC 
lib/rdma_provider/common.o 00:03:19.747 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:20.008 LIB libspdk_jsonrpc.a 00:03:20.008 LIB libspdk_rdma_provider.a 00:03:20.008 SO libspdk_jsonrpc.so.6.0 00:03:20.008 SO libspdk_rdma_provider.so.7.0 00:03:20.008 SYMLINK libspdk_jsonrpc.so 00:03:20.008 SYMLINK libspdk_rdma_provider.so 00:03:20.270 LIB libspdk_env_dpdk.a 00:03:20.270 SO libspdk_env_dpdk.so.15.1 00:03:20.270 SYMLINK libspdk_env_dpdk.so 00:03:20.270 CC lib/rpc/rpc.o 00:03:20.531 LIB libspdk_rpc.a 00:03:20.531 SO libspdk_rpc.so.6.0 00:03:20.792 SYMLINK libspdk_rpc.so 00:03:21.054 CC lib/keyring/keyring.o 00:03:21.054 CC lib/trace/trace.o 00:03:21.054 CC lib/keyring/keyring_rpc.o 00:03:21.054 CC lib/trace/trace_flags.o 00:03:21.054 CC lib/trace/trace_rpc.o 00:03:21.054 CC lib/notify/notify.o 00:03:21.054 CC lib/notify/notify_rpc.o 00:03:21.315 LIB libspdk_notify.a 00:03:21.315 SO libspdk_notify.so.6.0 00:03:21.315 LIB libspdk_keyring.a 00:03:21.315 LIB libspdk_trace.a 00:03:21.315 SO libspdk_keyring.so.2.0 00:03:21.315 SO libspdk_trace.so.11.0 00:03:21.315 SYMLINK libspdk_notify.so 00:03:21.576 SYMLINK libspdk_keyring.so 00:03:21.576 SYMLINK libspdk_trace.so 00:03:21.838 CC lib/thread/thread.o 00:03:21.838 CC lib/thread/iobuf.o 00:03:21.838 CC lib/sock/sock.o 00:03:21.838 CC lib/sock/sock_rpc.o 00:03:22.098 LIB libspdk_sock.a 00:03:22.360 SO libspdk_sock.so.10.0 00:03:22.360 SYMLINK libspdk_sock.so 00:03:22.621 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:22.621 CC lib/nvme/nvme_ctrlr.o 00:03:22.621 CC lib/nvme/nvme_fabric.o 00:03:22.621 CC lib/nvme/nvme_ns_cmd.o 00:03:22.621 CC lib/nvme/nvme_ns.o 00:03:22.621 CC lib/nvme/nvme_pcie_common.o 00:03:22.621 CC lib/nvme/nvme_pcie.o 00:03:22.621 CC lib/nvme/nvme_qpair.o 00:03:22.621 CC lib/nvme/nvme.o 00:03:22.621 CC lib/nvme/nvme_quirks.o 00:03:22.621 CC lib/nvme/nvme_transport.o 00:03:22.621 CC lib/nvme/nvme_discovery.o 00:03:22.621 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:22.621 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:22.621 CC lib/nvme/nvme_tcp.o 00:03:22.621 CC lib/nvme/nvme_opal.o 00:03:22.621 CC lib/nvme/nvme_io_msg.o 00:03:22.621 CC lib/nvme/nvme_poll_group.o 00:03:22.621 CC lib/nvme/nvme_zns.o 00:03:22.621 CC lib/nvme/nvme_stubs.o 00:03:22.621 CC lib/nvme/nvme_auth.o 00:03:22.621 CC lib/nvme/nvme_cuse.o 00:03:22.621 CC lib/nvme/nvme_vfio_user.o 00:03:22.621 CC lib/nvme/nvme_rdma.o 00:03:23.193 LIB libspdk_thread.a 00:03:23.193 SO libspdk_thread.so.11.0 00:03:23.193 SYMLINK libspdk_thread.so 00:03:23.765 CC lib/fsdev/fsdev.o 00:03:23.765 CC lib/fsdev/fsdev_io.o 00:03:23.765 CC lib/fsdev/fsdev_rpc.o 00:03:23.765 CC lib/init/json_config.o 00:03:23.765 CC lib/virtio/virtio.o 00:03:23.765 CC lib/init/subsystem.o 00:03:23.765 CC lib/virtio/virtio_vhost_user.o 00:03:23.765 CC lib/init/subsystem_rpc.o 00:03:23.765 CC lib/virtio/virtio_vfio_user.o 00:03:23.765 CC lib/virtio/virtio_pci.o 00:03:23.765 CC lib/init/rpc.o 00:03:23.765 CC lib/vfu_tgt/tgt_endpoint.o 00:03:23.765 CC lib/vfu_tgt/tgt_rpc.o 00:03:23.765 CC lib/accel/accel.o 00:03:23.765 CC lib/blob/blobstore.o 00:03:23.765 CC lib/blob/request.o 00:03:23.765 CC lib/accel/accel_rpc.o 00:03:23.765 CC lib/blob/zeroes.o 00:03:23.765 CC lib/accel/accel_sw.o 00:03:23.765 CC lib/blob/blob_bs_dev.o 00:03:24.025 LIB libspdk_init.a 00:03:24.025 SO libspdk_init.so.6.0 00:03:24.025 LIB libspdk_vfu_tgt.a 00:03:24.025 LIB libspdk_virtio.a 00:03:24.025 SYMLINK libspdk_init.so 00:03:24.025 SO libspdk_vfu_tgt.so.3.0 00:03:24.025 SO libspdk_virtio.so.7.0 00:03:24.025 SYMLINK libspdk_vfu_tgt.so 00:03:24.025 SYMLINK 
libspdk_virtio.so 00:03:24.286 LIB libspdk_fsdev.a 00:03:24.286 SO libspdk_fsdev.so.2.0 00:03:24.286 SYMLINK libspdk_fsdev.so 00:03:24.286 CC lib/event/app.o 00:03:24.286 CC lib/event/reactor.o 00:03:24.286 CC lib/event/log_rpc.o 00:03:24.286 CC lib/event/app_rpc.o 00:03:24.286 CC lib/event/scheduler_static.o 00:03:24.547 LIB libspdk_accel.a 00:03:24.808 LIB libspdk_nvme.a 00:03:24.808 SO libspdk_accel.so.16.0 00:03:24.808 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:24.808 SYMLINK libspdk_accel.so 00:03:24.808 LIB libspdk_event.a 00:03:24.808 SO libspdk_nvme.so.15.0 00:03:24.808 SO libspdk_event.so.14.0 00:03:25.069 SYMLINK libspdk_event.so 00:03:25.069 SYMLINK libspdk_nvme.so 00:03:25.069 CC lib/bdev/bdev.o 00:03:25.069 CC lib/bdev/bdev_rpc.o 00:03:25.069 CC lib/bdev/bdev_zone.o 00:03:25.069 CC lib/bdev/part.o 00:03:25.069 CC lib/bdev/scsi_nvme.o 00:03:25.329 LIB libspdk_fuse_dispatcher.a 00:03:25.329 SO libspdk_fuse_dispatcher.so.1.0 00:03:25.329 SYMLINK libspdk_fuse_dispatcher.so 00:03:26.269 LIB libspdk_blob.a 00:03:26.269 SO libspdk_blob.so.11.0 00:03:26.529 SYMLINK libspdk_blob.so 00:03:26.790 CC lib/blobfs/blobfs.o 00:03:26.790 CC lib/blobfs/tree.o 00:03:26.790 CC lib/lvol/lvol.o 00:03:27.361 LIB libspdk_bdev.a 00:03:27.621 SO libspdk_bdev.so.17.0 00:03:27.621 LIB libspdk_blobfs.a 00:03:27.621 SYMLINK libspdk_bdev.so 00:03:27.621 SO libspdk_blobfs.so.10.0 00:03:27.621 LIB libspdk_lvol.a 00:03:27.621 SYMLINK libspdk_blobfs.so 00:03:27.621 SO libspdk_lvol.so.10.0 00:03:27.882 SYMLINK libspdk_lvol.so 00:03:27.882 CC lib/nbd/nbd.o 00:03:27.882 CC lib/nvmf/ctrlr.o 00:03:27.882 CC lib/scsi/dev.o 00:03:27.882 CC lib/nbd/nbd_rpc.o 00:03:27.882 CC lib/nvmf/ctrlr_discovery.o 00:03:27.882 CC lib/scsi/lun.o 00:03:27.882 CC lib/ublk/ublk.o 00:03:27.882 CC lib/nvmf/ctrlr_bdev.o 00:03:27.882 CC lib/scsi/port.o 00:03:27.882 CC lib/ublk/ublk_rpc.o 00:03:27.882 CC lib/nvmf/subsystem.o 00:03:27.882 CC lib/scsi/scsi.o 00:03:27.882 CC lib/nvmf/nvmf.o 00:03:27.882 CC lib/scsi/scsi_bdev.o 00:03:27.882 CC lib/nvmf/nvmf_rpc.o 00:03:27.882 CC lib/scsi/scsi_pr.o 00:03:27.882 CC lib/nvmf/transport.o 00:03:27.882 CC lib/scsi/scsi_rpc.o 00:03:27.882 CC lib/ftl/ftl_core.o 00:03:27.883 CC lib/nvmf/tcp.o 00:03:27.883 CC lib/scsi/task.o 00:03:27.883 CC lib/ftl/ftl_init.o 00:03:27.883 CC lib/nvmf/stubs.o 00:03:27.883 CC lib/ftl/ftl_layout.o 00:03:27.883 CC lib/nvmf/mdns_server.o 00:03:27.883 CC lib/ftl/ftl_debug.o 00:03:27.883 CC lib/nvmf/vfio_user.o 00:03:27.883 CC lib/ftl/ftl_io.o 00:03:27.883 CC lib/nvmf/rdma.o 00:03:27.883 CC lib/ftl/ftl_sb.o 00:03:27.883 CC lib/nvmf/auth.o 00:03:27.883 CC lib/ftl/ftl_l2p.o 00:03:27.883 CC lib/ftl/ftl_l2p_flat.o 00:03:27.883 CC lib/ftl/ftl_nv_cache.o 00:03:27.883 CC lib/ftl/ftl_band.o 00:03:27.883 CC lib/ftl/ftl_band_ops.o 00:03:27.883 CC lib/ftl/ftl_writer.o 00:03:27.883 CC lib/ftl/ftl_rq.o 00:03:27.883 CC lib/ftl/ftl_reloc.o 00:03:27.883 CC lib/ftl/ftl_l2p_cache.o 00:03:27.883 CC lib/ftl/ftl_p2l.o 00:03:27.883 CC lib/ftl/ftl_p2l_log.o 00:03:27.883 CC lib/ftl/mngt/ftl_mngt.o 00:03:27.883 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:27.883 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:27.883 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:27.883 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:27.883 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:27.883 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:27.883 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:27.883 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:27.883 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:27.883 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:28.143 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:03:28.143 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:28.143 CC lib/ftl/utils/ftl_mempool.o 00:03:28.143 CC lib/ftl/utils/ftl_conf.o 00:03:28.143 CC lib/ftl/utils/ftl_md.o 00:03:28.143 CC lib/ftl/utils/ftl_bitmap.o 00:03:28.143 CC lib/ftl/utils/ftl_property.o 00:03:28.143 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:28.143 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:28.143 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:28.143 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:28.143 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:28.143 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:28.143 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:28.143 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:28.143 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:28.143 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:28.143 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:28.143 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:28.143 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:28.143 CC lib/ftl/base/ftl_base_dev.o 00:03:28.143 CC lib/ftl/base/ftl_base_bdev.o 00:03:28.143 CC lib/ftl/ftl_trace.o 00:03:28.714 LIB libspdk_nbd.a 00:03:28.714 SO libspdk_nbd.so.7.0 00:03:28.714 LIB libspdk_scsi.a 00:03:28.714 SYMLINK libspdk_nbd.so 00:03:28.714 SO libspdk_scsi.so.9.0 00:03:28.975 LIB libspdk_ublk.a 00:03:28.975 SYMLINK libspdk_scsi.so 00:03:28.975 SO libspdk_ublk.so.3.0 00:03:28.975 SYMLINK libspdk_ublk.so 00:03:29.236 LIB libspdk_ftl.a 00:03:29.236 CC lib/iscsi/conn.o 00:03:29.236 CC lib/iscsi/init_grp.o 00:03:29.236 CC lib/iscsi/iscsi.o 00:03:29.236 CC lib/iscsi/param.o 00:03:29.236 CC lib/iscsi/portal_grp.o 00:03:29.236 CC lib/iscsi/tgt_node.o 00:03:29.236 CC lib/vhost/vhost.o 00:03:29.236 CC lib/iscsi/iscsi_subsystem.o 00:03:29.236 CC lib/iscsi/iscsi_rpc.o 00:03:29.236 CC lib/vhost/vhost_rpc.o 00:03:29.236 CC lib/iscsi/task.o 00:03:29.236 CC lib/vhost/vhost_scsi.o 00:03:29.236 CC lib/vhost/vhost_blk.o 00:03:29.236 CC lib/vhost/rte_vhost_user.o 00:03:29.497 SO libspdk_ftl.so.9.0 00:03:29.757 SYMLINK libspdk_ftl.so 00:03:30.019 LIB libspdk_nvmf.a 00:03:30.280 SO libspdk_nvmf.so.20.0 00:03:30.280 LIB libspdk_vhost.a 00:03:30.280 SO libspdk_vhost.so.8.0 00:03:30.280 SYMLINK libspdk_vhost.so 00:03:30.280 SYMLINK libspdk_nvmf.so 00:03:30.541 LIB libspdk_iscsi.a 00:03:30.541 SO libspdk_iscsi.so.8.0 00:03:30.802 SYMLINK libspdk_iscsi.so 00:03:31.376 CC module/env_dpdk/env_dpdk_rpc.o 00:03:31.376 CC module/vfu_device/vfu_virtio.o 00:03:31.376 CC module/vfu_device/vfu_virtio_blk.o 00:03:31.376 CC module/vfu_device/vfu_virtio_scsi.o 00:03:31.376 CC module/vfu_device/vfu_virtio_rpc.o 00:03:31.376 CC module/vfu_device/vfu_virtio_fs.o 00:03:31.376 LIB libspdk_env_dpdk_rpc.a 00:03:31.376 CC module/sock/posix/posix.o 00:03:31.376 CC module/blob/bdev/blob_bdev.o 00:03:31.376 CC module/accel/error/accel_error.o 00:03:31.376 CC module/accel/error/accel_error_rpc.o 00:03:31.376 CC module/accel/ioat/accel_ioat.o 00:03:31.376 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:31.376 CC module/accel/dsa/accel_dsa.o 00:03:31.376 CC module/keyring/linux/keyring.o 00:03:31.376 CC module/keyring/linux/keyring_rpc.o 00:03:31.376 CC module/accel/ioat/accel_ioat_rpc.o 00:03:31.376 CC module/accel/dsa/accel_dsa_rpc.o 00:03:31.376 CC module/keyring/file/keyring.o 00:03:31.376 CC module/scheduler/gscheduler/gscheduler.o 00:03:31.376 CC module/keyring/file/keyring_rpc.o 00:03:31.376 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:31.376 CC module/fsdev/aio/fsdev_aio.o 00:03:31.376 CC module/accel/iaa/accel_iaa.o 00:03:31.376 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:31.376 CC 
module/accel/iaa/accel_iaa_rpc.o 00:03:31.376 CC module/fsdev/aio/linux_aio_mgr.o 00:03:31.376 SO libspdk_env_dpdk_rpc.so.6.0 00:03:31.636 SYMLINK libspdk_env_dpdk_rpc.so 00:03:31.636 LIB libspdk_keyring_linux.a 00:03:31.636 LIB libspdk_scheduler_dpdk_governor.a 00:03:31.636 LIB libspdk_keyring_file.a 00:03:31.636 LIB libspdk_scheduler_gscheduler.a 00:03:31.636 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:31.636 LIB libspdk_scheduler_dynamic.a 00:03:31.636 LIB libspdk_accel_error.a 00:03:31.636 SO libspdk_keyring_linux.so.1.0 00:03:31.636 LIB libspdk_accel_ioat.a 00:03:31.636 SO libspdk_scheduler_gscheduler.so.4.0 00:03:31.636 SO libspdk_keyring_file.so.2.0 00:03:31.636 LIB libspdk_accel_iaa.a 00:03:31.636 SO libspdk_scheduler_dynamic.so.4.0 00:03:31.636 SO libspdk_accel_error.so.2.0 00:03:31.636 LIB libspdk_blob_bdev.a 00:03:31.636 SO libspdk_accel_ioat.so.6.0 00:03:31.897 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:31.897 SYMLINK libspdk_keyring_linux.so 00:03:31.897 SO libspdk_accel_iaa.so.3.0 00:03:31.897 SO libspdk_blob_bdev.so.11.0 00:03:31.897 SYMLINK libspdk_scheduler_gscheduler.so 00:03:31.897 LIB libspdk_accel_dsa.a 00:03:31.897 SYMLINK libspdk_keyring_file.so 00:03:31.897 SYMLINK libspdk_scheduler_dynamic.so 00:03:31.897 SYMLINK libspdk_accel_error.so 00:03:31.897 SYMLINK libspdk_accel_ioat.so 00:03:31.897 SO libspdk_accel_dsa.so.5.0 00:03:31.897 SYMLINK libspdk_blob_bdev.so 00:03:31.897 SYMLINK libspdk_accel_iaa.so 00:03:31.897 LIB libspdk_vfu_device.a 00:03:31.897 SYMLINK libspdk_accel_dsa.so 00:03:31.897 SO libspdk_vfu_device.so.3.0 00:03:31.897 SYMLINK libspdk_vfu_device.so 00:03:32.158 LIB libspdk_fsdev_aio.a 00:03:32.158 LIB libspdk_sock_posix.a 00:03:32.158 SO libspdk_fsdev_aio.so.1.0 00:03:32.158 SO libspdk_sock_posix.so.6.0 00:03:32.158 SYMLINK libspdk_fsdev_aio.so 00:03:32.419 SYMLINK libspdk_sock_posix.so 00:03:32.419 CC module/bdev/gpt/gpt.o 00:03:32.419 CC module/bdev/gpt/vbdev_gpt.o 00:03:32.419 CC module/bdev/error/vbdev_error.o 00:03:32.419 CC module/bdev/error/vbdev_error_rpc.o 00:03:32.419 CC module/bdev/delay/vbdev_delay.o 00:03:32.419 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:32.419 CC module/blobfs/bdev/blobfs_bdev.o 00:03:32.419 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:32.419 CC module/bdev/malloc/bdev_malloc.o 00:03:32.419 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:32.419 CC module/bdev/nvme/bdev_nvme.o 00:03:32.419 CC module/bdev/split/vbdev_split.o 00:03:32.419 CC module/bdev/null/bdev_null.o 00:03:32.420 CC module/bdev/split/vbdev_split_rpc.o 00:03:32.420 CC module/bdev/aio/bdev_aio.o 00:03:32.420 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:32.420 CC module/bdev/nvme/nvme_rpc.o 00:03:32.420 CC module/bdev/null/bdev_null_rpc.o 00:03:32.420 CC module/bdev/aio/bdev_aio_rpc.o 00:03:32.420 CC module/bdev/nvme/bdev_mdns_client.o 00:03:32.420 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:32.420 CC module/bdev/nvme/vbdev_opal.o 00:03:32.420 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:32.420 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:32.420 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:32.420 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:32.420 CC module/bdev/iscsi/bdev_iscsi.o 00:03:32.420 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:32.420 CC module/bdev/raid/bdev_raid.o 00:03:32.420 CC module/bdev/ftl/bdev_ftl.o 00:03:32.420 CC module/bdev/raid/bdev_raid_rpc.o 00:03:32.420 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:32.420 CC module/bdev/raid/bdev_raid_sb.o 00:03:32.420 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:32.420 CC 
module/bdev/lvol/vbdev_lvol.o 00:03:32.420 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:32.420 CC module/bdev/passthru/vbdev_passthru.o 00:03:32.420 CC module/bdev/raid/raid0.o 00:03:32.420 CC module/bdev/raid/raid1.o 00:03:32.420 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:32.420 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:32.420 CC module/bdev/raid/concat.o 00:03:32.680 LIB libspdk_blobfs_bdev.a 00:03:32.680 LIB libspdk_bdev_split.a 00:03:32.680 SO libspdk_blobfs_bdev.so.6.0 00:03:32.680 LIB libspdk_bdev_gpt.a 00:03:32.680 LIB libspdk_bdev_error.a 00:03:32.680 SO libspdk_bdev_split.so.6.0 00:03:32.680 LIB libspdk_bdev_null.a 00:03:32.941 SO libspdk_bdev_gpt.so.6.0 00:03:32.941 SYMLINK libspdk_blobfs_bdev.so 00:03:32.941 SO libspdk_bdev_error.so.6.0 00:03:32.941 SO libspdk_bdev_null.so.6.0 00:03:32.941 SYMLINK libspdk_bdev_split.so 00:03:32.941 LIB libspdk_bdev_ftl.a 00:03:32.941 LIB libspdk_bdev_aio.a 00:03:32.941 LIB libspdk_bdev_passthru.a 00:03:32.941 LIB libspdk_bdev_delay.a 00:03:32.941 LIB libspdk_bdev_malloc.a 00:03:32.941 LIB libspdk_bdev_zone_block.a 00:03:32.941 SYMLINK libspdk_bdev_error.so 00:03:32.941 LIB libspdk_bdev_iscsi.a 00:03:32.941 SYMLINK libspdk_bdev_gpt.so 00:03:32.941 SO libspdk_bdev_aio.so.6.0 00:03:32.941 SYMLINK libspdk_bdev_null.so 00:03:32.941 SO libspdk_bdev_passthru.so.6.0 00:03:32.941 SO libspdk_bdev_ftl.so.6.0 00:03:32.941 SO libspdk_bdev_delay.so.6.0 00:03:32.941 SO libspdk_bdev_malloc.so.6.0 00:03:32.941 SO libspdk_bdev_zone_block.so.6.0 00:03:32.941 SO libspdk_bdev_iscsi.so.6.0 00:03:32.941 SYMLINK libspdk_bdev_passthru.so 00:03:32.941 SYMLINK libspdk_bdev_ftl.so 00:03:32.941 SYMLINK libspdk_bdev_aio.so 00:03:32.941 SYMLINK libspdk_bdev_delay.so 00:03:32.941 SYMLINK libspdk_bdev_malloc.so 00:03:32.941 SYMLINK libspdk_bdev_zone_block.so 00:03:32.941 SYMLINK libspdk_bdev_iscsi.so 00:03:32.941 LIB libspdk_bdev_virtio.a 00:03:32.941 LIB libspdk_bdev_lvol.a 00:03:33.202 SO libspdk_bdev_virtio.so.6.0 00:03:33.202 SO libspdk_bdev_lvol.so.6.0 00:03:33.202 SYMLINK libspdk_bdev_virtio.so 00:03:33.202 SYMLINK libspdk_bdev_lvol.so 00:03:33.463 LIB libspdk_bdev_raid.a 00:03:33.463 SO libspdk_bdev_raid.so.6.0 00:03:33.724 SYMLINK libspdk_bdev_raid.so 00:03:34.666 LIB libspdk_bdev_nvme.a 00:03:34.927 SO libspdk_bdev_nvme.so.7.1 00:03:34.927 SYMLINK libspdk_bdev_nvme.so 00:03:35.869 CC module/event/subsystems/iobuf/iobuf.o 00:03:35.869 CC module/event/subsystems/vmd/vmd.o 00:03:35.869 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:35.869 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:35.869 CC module/event/subsystems/sock/sock.o 00:03:35.869 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:35.869 CC module/event/subsystems/scheduler/scheduler.o 00:03:35.869 CC module/event/subsystems/fsdev/fsdev.o 00:03:35.869 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:35.869 CC module/event/subsystems/keyring/keyring.o 00:03:35.869 LIB libspdk_event_sock.a 00:03:35.869 LIB libspdk_event_scheduler.a 00:03:35.869 LIB libspdk_event_vfu_tgt.a 00:03:35.869 LIB libspdk_event_vmd.a 00:03:35.869 LIB libspdk_event_keyring.a 00:03:35.869 LIB libspdk_event_iobuf.a 00:03:35.869 LIB libspdk_event_vhost_blk.a 00:03:35.869 LIB libspdk_event_fsdev.a 00:03:35.869 SO libspdk_event_sock.so.5.0 00:03:35.869 SO libspdk_event_scheduler.so.4.0 00:03:35.869 SO libspdk_event_vfu_tgt.so.3.0 00:03:35.870 SO libspdk_event_iobuf.so.3.0 00:03:35.870 SO libspdk_event_keyring.so.1.0 00:03:35.870 SO libspdk_event_vmd.so.6.0 00:03:35.870 SO libspdk_event_fsdev.so.1.0 00:03:35.870 SO 
libspdk_event_vhost_blk.so.3.0 00:03:35.870 SYMLINK libspdk_event_sock.so 00:03:35.870 SYMLINK libspdk_event_scheduler.so 00:03:35.870 SYMLINK libspdk_event_iobuf.so 00:03:35.870 SYMLINK libspdk_event_vfu_tgt.so 00:03:35.870 SYMLINK libspdk_event_keyring.so 00:03:35.870 SYMLINK libspdk_event_fsdev.so 00:03:35.870 SYMLINK libspdk_event_vhost_blk.so 00:03:35.870 SYMLINK libspdk_event_vmd.so 00:03:36.442 CC module/event/subsystems/accel/accel.o 00:03:36.442 LIB libspdk_event_accel.a 00:03:36.442 SO libspdk_event_accel.so.6.0 00:03:36.703 SYMLINK libspdk_event_accel.so 00:03:36.963 CC module/event/subsystems/bdev/bdev.o 00:03:37.224 LIB libspdk_event_bdev.a 00:03:37.224 SO libspdk_event_bdev.so.6.0 00:03:37.224 SYMLINK libspdk_event_bdev.so 00:03:37.485 CC module/event/subsystems/ublk/ublk.o 00:03:37.485 CC module/event/subsystems/scsi/scsi.o 00:03:37.485 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:37.485 CC module/event/subsystems/nbd/nbd.o 00:03:37.485 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:37.746 LIB libspdk_event_ublk.a 00:03:37.746 LIB libspdk_event_nbd.a 00:03:37.746 LIB libspdk_event_scsi.a 00:03:37.746 SO libspdk_event_ublk.so.3.0 00:03:37.746 SO libspdk_event_nbd.so.6.0 00:03:37.746 SO libspdk_event_scsi.so.6.0 00:03:37.746 LIB libspdk_event_nvmf.a 00:03:37.746 SYMLINK libspdk_event_ublk.so 00:03:38.008 SYMLINK libspdk_event_nbd.so 00:03:38.008 SYMLINK libspdk_event_scsi.so 00:03:38.008 SO libspdk_event_nvmf.so.6.0 00:03:38.008 SYMLINK libspdk_event_nvmf.so 00:03:38.268 CC module/event/subsystems/iscsi/iscsi.o 00:03:38.268 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:38.530 LIB libspdk_event_vhost_scsi.a 00:03:38.530 LIB libspdk_event_iscsi.a 00:03:38.530 SO libspdk_event_vhost_scsi.so.3.0 00:03:38.530 SO libspdk_event_iscsi.so.6.0 00:03:38.530 SYMLINK libspdk_event_vhost_scsi.so 00:03:38.530 SYMLINK libspdk_event_iscsi.so 00:03:38.849 SO libspdk.so.6.0 00:03:38.849 SYMLINK libspdk.so 00:03:39.198 CC app/trace_record/trace_record.o 00:03:39.198 CXX app/trace/trace.o 00:03:39.198 CC app/spdk_top/spdk_top.o 00:03:39.198 CC app/spdk_nvme_perf/perf.o 00:03:39.198 CC app/spdk_lspci/spdk_lspci.o 00:03:39.198 CC test/rpc_client/rpc_client_test.o 00:03:39.198 TEST_HEADER include/spdk/accel.h 00:03:39.198 CC app/spdk_nvme_identify/identify.o 00:03:39.198 CC app/spdk_nvme_discover/discovery_aer.o 00:03:39.198 TEST_HEADER include/spdk/accel_module.h 00:03:39.198 TEST_HEADER include/spdk/assert.h 00:03:39.198 TEST_HEADER include/spdk/base64.h 00:03:39.198 TEST_HEADER include/spdk/barrier.h 00:03:39.198 TEST_HEADER include/spdk/bdev_module.h 00:03:39.198 TEST_HEADER include/spdk/bdev.h 00:03:39.198 TEST_HEADER include/spdk/bdev_zone.h 00:03:39.198 TEST_HEADER include/spdk/bit_array.h 00:03:39.198 TEST_HEADER include/spdk/bit_pool.h 00:03:39.198 TEST_HEADER include/spdk/blob_bdev.h 00:03:39.198 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:39.198 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:39.198 TEST_HEADER include/spdk/blobfs.h 00:03:39.198 TEST_HEADER include/spdk/blob.h 00:03:39.198 TEST_HEADER include/spdk/conf.h 00:03:39.198 TEST_HEADER include/spdk/config.h 00:03:39.198 TEST_HEADER include/spdk/cpuset.h 00:03:39.198 TEST_HEADER include/spdk/crc16.h 00:03:39.198 TEST_HEADER include/spdk/crc64.h 00:03:39.198 TEST_HEADER include/spdk/crc32.h 00:03:39.198 TEST_HEADER include/spdk/dif.h 00:03:39.198 TEST_HEADER include/spdk/dma.h 00:03:39.198 TEST_HEADER include/spdk/endian.h 00:03:39.198 TEST_HEADER include/spdk/env.h 00:03:39.198 CC app/spdk_dd/spdk_dd.o 
00:03:39.198 TEST_HEADER include/spdk/env_dpdk.h 00:03:39.198 TEST_HEADER include/spdk/fd_group.h 00:03:39.198 TEST_HEADER include/spdk/event.h 00:03:39.198 TEST_HEADER include/spdk/file.h 00:03:39.198 TEST_HEADER include/spdk/fd.h 00:03:39.198 TEST_HEADER include/spdk/fsdev.h 00:03:39.198 TEST_HEADER include/spdk/ftl.h 00:03:39.198 TEST_HEADER include/spdk/fsdev_module.h 00:03:39.198 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:39.198 TEST_HEADER include/spdk/gpt_spec.h 00:03:39.198 CC app/nvmf_tgt/nvmf_main.o 00:03:39.198 TEST_HEADER include/spdk/histogram_data.h 00:03:39.198 TEST_HEADER include/spdk/hexlify.h 00:03:39.198 TEST_HEADER include/spdk/idxd.h 00:03:39.198 TEST_HEADER include/spdk/idxd_spec.h 00:03:39.198 TEST_HEADER include/spdk/init.h 00:03:39.198 TEST_HEADER include/spdk/ioat_spec.h 00:03:39.198 TEST_HEADER include/spdk/ioat.h 00:03:39.198 TEST_HEADER include/spdk/iscsi_spec.h 00:03:39.198 TEST_HEADER include/spdk/json.h 00:03:39.198 TEST_HEADER include/spdk/jsonrpc.h 00:03:39.198 TEST_HEADER include/spdk/keyring.h 00:03:39.198 TEST_HEADER include/spdk/keyring_module.h 00:03:39.198 TEST_HEADER include/spdk/likely.h 00:03:39.198 TEST_HEADER include/spdk/log.h 00:03:39.198 CC app/spdk_tgt/spdk_tgt.o 00:03:39.198 TEST_HEADER include/spdk/md5.h 00:03:39.198 TEST_HEADER include/spdk/lvol.h 00:03:39.198 TEST_HEADER include/spdk/memory.h 00:03:39.198 TEST_HEADER include/spdk/mmio.h 00:03:39.198 CC app/iscsi_tgt/iscsi_tgt.o 00:03:39.198 TEST_HEADER include/spdk/nbd.h 00:03:39.198 TEST_HEADER include/spdk/net.h 00:03:39.198 TEST_HEADER include/spdk/notify.h 00:03:39.198 TEST_HEADER include/spdk/nvme_intel.h 00:03:39.198 TEST_HEADER include/spdk/nvme.h 00:03:39.198 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:39.198 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:39.199 TEST_HEADER include/spdk/nvme_spec.h 00:03:39.199 TEST_HEADER include/spdk/nvme_zns.h 00:03:39.199 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:39.199 TEST_HEADER include/spdk/nvmf.h 00:03:39.199 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:39.199 TEST_HEADER include/spdk/nvmf_spec.h 00:03:39.199 TEST_HEADER include/spdk/nvmf_transport.h 00:03:39.199 TEST_HEADER include/spdk/opal.h 00:03:39.199 TEST_HEADER include/spdk/opal_spec.h 00:03:39.199 TEST_HEADER include/spdk/pci_ids.h 00:03:39.199 TEST_HEADER include/spdk/queue.h 00:03:39.199 TEST_HEADER include/spdk/pipe.h 00:03:39.199 TEST_HEADER include/spdk/reduce.h 00:03:39.199 TEST_HEADER include/spdk/rpc.h 00:03:39.199 TEST_HEADER include/spdk/scsi.h 00:03:39.199 TEST_HEADER include/spdk/scheduler.h 00:03:39.486 TEST_HEADER include/spdk/sock.h 00:03:39.486 TEST_HEADER include/spdk/scsi_spec.h 00:03:39.486 TEST_HEADER include/spdk/stdinc.h 00:03:39.486 TEST_HEADER include/spdk/string.h 00:03:39.486 TEST_HEADER include/spdk/thread.h 00:03:39.486 TEST_HEADER include/spdk/trace.h 00:03:39.486 TEST_HEADER include/spdk/trace_parser.h 00:03:39.486 TEST_HEADER include/spdk/tree.h 00:03:39.486 TEST_HEADER include/spdk/ublk.h 00:03:39.486 TEST_HEADER include/spdk/util.h 00:03:39.486 TEST_HEADER include/spdk/uuid.h 00:03:39.486 TEST_HEADER include/spdk/version.h 00:03:39.486 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:39.486 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:39.486 TEST_HEADER include/spdk/vhost.h 00:03:39.486 TEST_HEADER include/spdk/vmd.h 00:03:39.486 TEST_HEADER include/spdk/xor.h 00:03:39.486 TEST_HEADER include/spdk/zipf.h 00:03:39.486 CXX test/cpp_headers/assert.o 00:03:39.486 CXX test/cpp_headers/accel.o 00:03:39.486 CXX test/cpp_headers/accel_module.o 
00:03:39.486 CXX test/cpp_headers/barrier.o 00:03:39.486 CXX test/cpp_headers/base64.o 00:03:39.486 CXX test/cpp_headers/bdev.o 00:03:39.486 CXX test/cpp_headers/bdev_module.o 00:03:39.486 CXX test/cpp_headers/bdev_zone.o 00:03:39.486 CXX test/cpp_headers/bit_array.o 00:03:39.486 CXX test/cpp_headers/blobfs_bdev.o 00:03:39.486 CXX test/cpp_headers/blob_bdev.o 00:03:39.486 CXX test/cpp_headers/bit_pool.o 00:03:39.486 CXX test/cpp_headers/blobfs.o 00:03:39.486 CXX test/cpp_headers/blob.o 00:03:39.486 CXX test/cpp_headers/conf.o 00:03:39.486 CXX test/cpp_headers/config.o 00:03:39.486 CXX test/cpp_headers/cpuset.o 00:03:39.486 CXX test/cpp_headers/crc16.o 00:03:39.486 CXX test/cpp_headers/crc32.o 00:03:39.486 CXX test/cpp_headers/crc64.o 00:03:39.486 CXX test/cpp_headers/dif.o 00:03:39.486 CXX test/cpp_headers/dma.o 00:03:39.486 CXX test/cpp_headers/env_dpdk.o 00:03:39.486 CXX test/cpp_headers/endian.o 00:03:39.486 CXX test/cpp_headers/env.o 00:03:39.486 CXX test/cpp_headers/fd_group.o 00:03:39.486 CXX test/cpp_headers/event.o 00:03:39.486 CXX test/cpp_headers/fd.o 00:03:39.486 CXX test/cpp_headers/file.o 00:03:39.486 CXX test/cpp_headers/fsdev.o 00:03:39.486 CXX test/cpp_headers/fsdev_module.o 00:03:39.486 CXX test/cpp_headers/ftl.o 00:03:39.486 CXX test/cpp_headers/fuse_dispatcher.o 00:03:39.486 CXX test/cpp_headers/gpt_spec.o 00:03:39.486 CXX test/cpp_headers/idxd.o 00:03:39.486 CXX test/cpp_headers/hexlify.o 00:03:39.486 CXX test/cpp_headers/histogram_data.o 00:03:39.486 CXX test/cpp_headers/idxd_spec.o 00:03:39.486 CXX test/cpp_headers/init.o 00:03:39.486 CXX test/cpp_headers/ioat_spec.o 00:03:39.486 CXX test/cpp_headers/ioat.o 00:03:39.486 CXX test/cpp_headers/iscsi_spec.o 00:03:39.486 CXX test/cpp_headers/json.o 00:03:39.486 CXX test/cpp_headers/jsonrpc.o 00:03:39.486 CXX test/cpp_headers/likely.o 00:03:39.486 CXX test/cpp_headers/log.o 00:03:39.486 CXX test/cpp_headers/keyring_module.o 00:03:39.486 CC test/app/histogram_perf/histogram_perf.o 00:03:39.486 CXX test/cpp_headers/keyring.o 00:03:39.486 CC test/app/jsoncat/jsoncat.o 00:03:39.486 CXX test/cpp_headers/md5.o 00:03:39.486 CXX test/cpp_headers/lvol.o 00:03:39.486 CXX test/cpp_headers/memory.o 00:03:39.486 CC examples/ioat/perf/perf.o 00:03:39.486 CXX test/cpp_headers/net.o 00:03:39.486 CXX test/cpp_headers/mmio.o 00:03:39.486 CXX test/cpp_headers/nvme.o 00:03:39.486 CXX test/cpp_headers/nbd.o 00:03:39.486 CXX test/cpp_headers/notify.o 00:03:39.486 CXX test/cpp_headers/nvme_ocssd.o 00:03:39.486 CXX test/cpp_headers/nvme_spec.o 00:03:39.486 CXX test/cpp_headers/nvme_zns.o 00:03:39.486 CC examples/ioat/verify/verify.o 00:03:39.486 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:39.486 CXX test/cpp_headers/nvme_intel.o 00:03:39.486 CXX test/cpp_headers/nvmf_transport.o 00:03:39.486 CXX test/cpp_headers/nvmf_cmd.o 00:03:39.486 CXX test/cpp_headers/nvmf.o 00:03:39.486 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:39.486 CXX test/cpp_headers/nvmf_spec.o 00:03:39.486 CXX test/cpp_headers/opal.o 00:03:39.486 CC test/env/vtophys/vtophys.o 00:03:39.486 CC examples/util/zipf/zipf.o 00:03:39.486 CXX test/cpp_headers/pci_ids.o 00:03:39.486 CXX test/cpp_headers/opal_spec.o 00:03:39.486 CXX test/cpp_headers/pipe.o 00:03:39.487 CXX test/cpp_headers/queue.o 00:03:39.487 CXX test/cpp_headers/reduce.o 00:03:39.487 CXX test/cpp_headers/rpc.o 00:03:39.487 CC test/app/stub/stub.o 00:03:39.487 LINK spdk_lspci 00:03:39.487 CC test/env/memory/memory_ut.o 00:03:39.487 CXX test/cpp_headers/scheduler.o 00:03:39.487 CC app/fio/nvme/fio_plugin.o 00:03:39.487 CXX 
test/cpp_headers/scsi_spec.o 00:03:39.487 CXX test/cpp_headers/scsi.o 00:03:39.487 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:39.487 CXX test/cpp_headers/sock.o 00:03:39.487 CXX test/cpp_headers/stdinc.o 00:03:39.487 CXX test/cpp_headers/string.o 00:03:39.487 CC test/env/pci/pci_ut.o 00:03:39.487 CXX test/cpp_headers/trace_parser.o 00:03:39.487 CXX test/cpp_headers/thread.o 00:03:39.487 CC test/thread/poller_perf/poller_perf.o 00:03:39.487 CXX test/cpp_headers/trace.o 00:03:39.487 CC test/dma/test_dma/test_dma.o 00:03:39.487 CXX test/cpp_headers/ublk.o 00:03:39.487 CXX test/cpp_headers/tree.o 00:03:39.487 CXX test/cpp_headers/util.o 00:03:39.487 CXX test/cpp_headers/uuid.o 00:03:39.487 CXX test/cpp_headers/version.o 00:03:39.487 CXX test/cpp_headers/vfio_user_pci.o 00:03:39.487 CXX test/cpp_headers/vfio_user_spec.o 00:03:39.487 CXX test/cpp_headers/vhost.o 00:03:39.487 CC test/app/bdev_svc/bdev_svc.o 00:03:39.487 CXX test/cpp_headers/zipf.o 00:03:39.487 CXX test/cpp_headers/vmd.o 00:03:39.487 CXX test/cpp_headers/xor.o 00:03:39.487 CC app/fio/bdev/fio_plugin.o 00:03:39.487 LINK rpc_client_test 00:03:39.755 LINK spdk_nvme_discover 00:03:39.755 LINK spdk_trace_record 00:03:39.755 LINK interrupt_tgt 00:03:39.755 LINK nvmf_tgt 00:03:40.025 LINK jsoncat 00:03:40.025 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:40.025 LINK iscsi_tgt 00:03:40.025 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:40.025 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:40.025 LINK spdk_tgt 00:03:40.025 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:40.025 LINK spdk_trace 00:03:40.286 CC test/env/mem_callbacks/mem_callbacks.o 00:03:40.286 LINK histogram_perf 00:03:40.286 LINK ioat_perf 00:03:40.286 LINK verify 00:03:40.546 LINK spdk_dd 00:03:40.546 LINK vtophys 00:03:40.546 LINK poller_perf 00:03:40.546 LINK stub 00:03:40.546 LINK bdev_svc 00:03:40.546 LINK env_dpdk_post_init 00:03:40.546 LINK zipf 00:03:40.806 LINK spdk_top 00:03:40.806 LINK spdk_nvme_perf 00:03:40.806 LINK test_dma 00:03:40.806 CC app/vhost/vhost.o 00:03:40.806 LINK nvme_fuzz 00:03:40.806 LINK vhost_fuzz 00:03:40.806 LINK spdk_nvme_identify 00:03:40.806 LINK pci_ut 00:03:41.067 LINK spdk_nvme 00:03:41.067 LINK spdk_bdev 00:03:41.067 CC test/event/reactor_perf/reactor_perf.o 00:03:41.067 LINK vhost 00:03:41.067 CC test/event/event_perf/event_perf.o 00:03:41.067 CC test/event/reactor/reactor.o 00:03:41.067 CC test/event/app_repeat/app_repeat.o 00:03:41.067 CC test/event/scheduler/scheduler.o 00:03:41.067 LINK mem_callbacks 00:03:41.067 CC examples/sock/hello_world/hello_sock.o 00:03:41.067 CC examples/vmd/lsvmd/lsvmd.o 00:03:41.067 CC examples/idxd/perf/perf.o 00:03:41.067 CC examples/vmd/led/led.o 00:03:41.067 CC examples/thread/thread/thread_ex.o 00:03:41.329 LINK reactor_perf 00:03:41.329 LINK reactor 00:03:41.329 LINK event_perf 00:03:41.329 LINK app_repeat 00:03:41.329 LINK lsvmd 00:03:41.329 CC test/nvme/simple_copy/simple_copy.o 00:03:41.329 CC test/nvme/startup/startup.o 00:03:41.329 CC test/nvme/e2edp/nvme_dp.o 00:03:41.329 CC test/nvme/compliance/nvme_compliance.o 00:03:41.329 CC test/nvme/sgl/sgl.o 00:03:41.329 CC test/nvme/boot_partition/boot_partition.o 00:03:41.329 LINK led 00:03:41.329 CC test/nvme/err_injection/err_injection.o 00:03:41.329 CC test/nvme/connect_stress/connect_stress.o 00:03:41.329 CC test/nvme/overhead/overhead.o 00:03:41.329 CC test/nvme/aer/aer.o 00:03:41.329 CC test/nvme/reset/reset.o 00:03:41.329 CC test/nvme/cuse/cuse.o 00:03:41.329 CC test/nvme/fused_ordering/fused_ordering.o 00:03:41.329 CC 
test/nvme/reserve/reserve.o 00:03:41.329 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:41.329 LINK scheduler 00:03:41.329 CC test/nvme/fdp/fdp.o 00:03:41.329 CC test/accel/dif/dif.o 00:03:41.329 CC test/blobfs/mkfs/mkfs.o 00:03:41.591 LINK hello_sock 00:03:41.591 LINK idxd_perf 00:03:41.591 LINK thread 00:03:41.591 CC test/lvol/esnap/esnap.o 00:03:41.591 LINK startup 00:03:41.591 LINK boot_partition 00:03:41.591 LINK memory_ut 00:03:41.591 LINK connect_stress 00:03:41.591 LINK err_injection 00:03:41.591 LINK reserve 00:03:41.591 LINK simple_copy 00:03:41.591 LINK fused_ordering 00:03:41.591 LINK doorbell_aers 00:03:41.591 LINK reset 00:03:41.591 LINK mkfs 00:03:41.591 LINK aer 00:03:41.591 LINK nvme_dp 00:03:41.591 LINK sgl 00:03:41.851 LINK overhead 00:03:41.851 LINK nvme_compliance 00:03:41.852 LINK fdp 00:03:42.114 LINK iscsi_fuzz 00:03:42.114 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:42.114 CC examples/nvme/hello_world/hello_world.o 00:03:42.114 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:42.114 CC examples/nvme/reconnect/reconnect.o 00:03:42.114 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:42.114 CC examples/nvme/arbitration/arbitration.o 00:03:42.114 CC examples/nvme/hotplug/hotplug.o 00:03:42.114 CC examples/nvme/abort/abort.o 00:03:42.114 LINK dif 00:03:42.114 CC examples/accel/perf/accel_perf.o 00:03:42.114 CC examples/blob/cli/blobcli.o 00:03:42.114 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:42.114 CC examples/blob/hello_world/hello_blob.o 00:03:42.114 LINK pmr_persistence 00:03:42.376 LINK cmb_copy 00:03:42.376 LINK hello_world 00:03:42.376 LINK hotplug 00:03:42.376 LINK reconnect 00:03:42.376 LINK arbitration 00:03:42.376 LINK abort 00:03:42.376 LINK hello_blob 00:03:42.376 LINK nvme_manage 00:03:42.376 LINK hello_fsdev 00:03:42.637 LINK cuse 00:03:42.637 LINK accel_perf 00:03:42.637 LINK blobcli 00:03:42.637 CC test/bdev/bdevio/bdevio.o 00:03:43.209 LINK bdevio 00:03:43.209 CC examples/bdev/hello_world/hello_bdev.o 00:03:43.209 CC examples/bdev/bdevperf/bdevperf.o 00:03:43.470 LINK hello_bdev 00:03:44.042 LINK bdevperf 00:03:44.613 CC examples/nvmf/nvmf/nvmf.o 00:03:44.873 LINK nvmf 00:03:45.816 LINK esnap 00:03:46.077 00:03:46.077 real 0m55.608s 00:03:46.077 user 8m7.579s 00:03:46.077 sys 5m39.008s 00:03:46.077 09:37:16 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:46.077 09:37:16 make -- common/autotest_common.sh@10 -- $ set +x 00:03:46.077 ************************************ 00:03:46.077 END TEST make 00:03:46.077 ************************************ 00:03:46.077 09:37:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:46.077 09:37:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:46.077 09:37:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:46.077 09:37:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.077 09:37:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:46.077 09:37:16 -- pm/common@44 -- $ pid=1045018 00:03:46.077 09:37:16 -- pm/common@50 -- $ kill -TERM 1045018 00:03:46.077 09:37:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.077 09:37:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:46.077 09:37:16 -- pm/common@44 -- $ pid=1045019 00:03:46.077 09:37:16 -- pm/common@50 -- $ kill -TERM 1045019 00:03:46.077 09:37:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:03:46.077 09:37:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:46.077 09:37:16 -- pm/common@44 -- $ pid=1045021 00:03:46.077 09:37:16 -- pm/common@50 -- $ kill -TERM 1045021 00:03:46.077 09:37:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.077 09:37:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:46.077 09:37:16 -- pm/common@44 -- $ pid=1045045 00:03:46.077 09:37:16 -- pm/common@50 -- $ sudo -E kill -TERM 1045045 00:03:46.077 09:37:16 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:46.077 09:37:16 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:46.077 09:37:16 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:46.077 09:37:16 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:46.078 09:37:16 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:46.340 09:37:17 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:46.340 09:37:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:46.340 09:37:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:46.340 09:37:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:46.340 09:37:17 -- scripts/common.sh@336 -- # IFS=.-: 00:03:46.340 09:37:17 -- scripts/common.sh@336 -- # read -ra ver1 00:03:46.340 09:37:17 -- scripts/common.sh@337 -- # IFS=.-: 00:03:46.340 09:37:17 -- scripts/common.sh@337 -- # read -ra ver2 00:03:46.340 09:37:17 -- scripts/common.sh@338 -- # local 'op=<' 00:03:46.340 09:37:17 -- scripts/common.sh@340 -- # ver1_l=2 00:03:46.340 09:37:17 -- scripts/common.sh@341 -- # ver2_l=1 00:03:46.340 09:37:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:46.340 09:37:17 -- scripts/common.sh@344 -- # case "$op" in 00:03:46.340 09:37:17 -- scripts/common.sh@345 -- # : 1 00:03:46.340 09:37:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:46.340 09:37:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:46.340 09:37:17 -- scripts/common.sh@365 -- # decimal 1 00:03:46.340 09:37:17 -- scripts/common.sh@353 -- # local d=1 00:03:46.340 09:37:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:46.340 09:37:17 -- scripts/common.sh@355 -- # echo 1 00:03:46.340 09:37:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:46.340 09:37:17 -- scripts/common.sh@366 -- # decimal 2 00:03:46.340 09:37:17 -- scripts/common.sh@353 -- # local d=2 00:03:46.340 09:37:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:46.340 09:37:17 -- scripts/common.sh@355 -- # echo 2 00:03:46.340 09:37:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:46.340 09:37:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:46.340 09:37:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:46.340 09:37:17 -- scripts/common.sh@368 -- # return 0 00:03:46.340 09:37:17 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:46.340 09:37:17 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:46.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.340 --rc genhtml_branch_coverage=1 00:03:46.340 --rc genhtml_function_coverage=1 00:03:46.340 --rc genhtml_legend=1 00:03:46.340 --rc geninfo_all_blocks=1 00:03:46.340 --rc geninfo_unexecuted_blocks=1 00:03:46.340 00:03:46.340 ' 00:03:46.340 09:37:17 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:46.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.340 --rc genhtml_branch_coverage=1 00:03:46.340 --rc genhtml_function_coverage=1 00:03:46.340 --rc genhtml_legend=1 00:03:46.340 --rc geninfo_all_blocks=1 00:03:46.340 --rc geninfo_unexecuted_blocks=1 00:03:46.340 00:03:46.340 ' 00:03:46.340 09:37:17 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:46.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.340 --rc genhtml_branch_coverage=1 00:03:46.340 --rc genhtml_function_coverage=1 00:03:46.340 --rc genhtml_legend=1 00:03:46.340 --rc geninfo_all_blocks=1 00:03:46.340 --rc geninfo_unexecuted_blocks=1 00:03:46.340 00:03:46.340 ' 00:03:46.340 09:37:17 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:46.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.340 --rc genhtml_branch_coverage=1 00:03:46.340 --rc genhtml_function_coverage=1 00:03:46.340 --rc genhtml_legend=1 00:03:46.340 --rc geninfo_all_blocks=1 00:03:46.340 --rc geninfo_unexecuted_blocks=1 00:03:46.340 00:03:46.340 ' 00:03:46.340 09:37:17 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:46.340 09:37:17 -- nvmf/common.sh@7 -- # uname -s 00:03:46.340 09:37:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:46.340 09:37:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:46.340 09:37:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:46.340 09:37:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:46.340 09:37:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:46.340 09:37:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:46.340 09:37:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:46.340 09:37:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:46.340 09:37:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:46.340 09:37:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:46.340 09:37:17 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:46.340 09:37:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:46.340 09:37:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:46.340 09:37:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:46.340 09:37:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:46.340 09:37:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:46.340 09:37:17 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:46.340 09:37:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:46.340 09:37:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:46.340 09:37:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:46.340 09:37:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:46.340 09:37:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.340 09:37:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.340 09:37:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.340 09:37:17 -- paths/export.sh@5 -- # export PATH 00:03:46.340 09:37:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.340 09:37:17 -- nvmf/common.sh@51 -- # : 0 00:03:46.340 09:37:17 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:46.340 09:37:17 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:46.340 09:37:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:46.340 09:37:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:46.340 09:37:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:46.340 09:37:17 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:46.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:46.340 09:37:17 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:46.340 09:37:17 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:46.340 09:37:17 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:46.340 09:37:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:46.340 09:37:17 -- spdk/autotest.sh@32 -- # uname -s 00:03:46.340 09:37:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:46.340 09:37:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:46.341 09:37:17 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
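The lt 1.15 2 trace above is scripts/common.sh deciding whether the installed lcov (1.15 here) predates version 2, which renamed the coverage rc options; since 1.15 < 2, the old spellings (--rc lcov_branch_coverage=1 and friends) are exported. A standalone sketch of the same field-by-field numeric comparison, as a simplified reconstruction rather than the shipped cmp_versions (which also splits on - and :):

  # Return success when dot-separated version $1 is strictly older than $2.
  version_lt() {
      local IFS=. i
      local -a a=($1) b=($2)
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0   # first differing field decides
          (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "old lcov: use --rc lcov_branch_coverage=1"

Separately, the "[: : integer expression expected" message logged above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': an unset flag reaches a numeric test as an empty string; expanding it as "${FLAG:-0}" would keep the test well-formed.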
00:03:46.341 09:37:17 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:46.341 09:37:17 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:46.341 09:37:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:46.341 09:37:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:46.341 09:37:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:46.341 09:37:17 -- spdk/autotest.sh@48 -- # udevadm_pid=1111146 00:03:46.341 09:37:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:46.341 09:37:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:46.341 09:37:17 -- pm/common@17 -- # local monitor 00:03:46.341 09:37:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.341 09:37:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.341 09:37:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.341 09:37:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.341 09:37:17 -- pm/common@21 -- # date +%s 00:03:46.341 09:37:17 -- pm/common@21 -- # date +%s 00:03:46.341 09:37:17 -- pm/common@25 -- # sleep 1 00:03:46.341 09:37:17 -- pm/common@21 -- # date +%s 00:03:46.341 09:37:17 -- pm/common@21 -- # date +%s 00:03:46.341 09:37:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732091837 00:03:46.341 09:37:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732091837 00:03:46.341 09:37:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732091837 00:03:46.341 09:37:17 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732091837 00:03:46.341 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732091837_collect-cpu-load.pm.log 00:03:46.341 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732091837_collect-vmstat.pm.log 00:03:46.341 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732091837_collect-cpu-temp.pm.log 00:03:46.341 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732091837_collect-bmc-pm.bmc.pm.log 00:03:47.283 09:37:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:47.283 09:37:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:47.283 09:37:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:47.283 09:37:18 -- common/autotest_common.sh@10 -- # set +x 00:03:47.283 09:37:18 -- spdk/autotest.sh@59 -- # create_test_list 00:03:47.283 09:37:18 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:47.283 09:37:18 -- common/autotest_common.sh@10 -- # set +x 00:03:47.543 09:37:18 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:47.543 09:37:18 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:47.543 09:37:18 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:47.543 09:37:18 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:47.543 09:37:18 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:47.543 09:37:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:47.543 09:37:18 -- common/autotest_common.sh@1457 -- # uname 00:03:47.543 09:37:18 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:47.543 09:37:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:47.543 09:37:18 -- common/autotest_common.sh@1477 -- # uname 00:03:47.543 09:37:18 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:47.543 09:37:18 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:47.543 09:37:18 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:47.543 lcov: LCOV version 1.15 00:03:47.543 09:37:18 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:02.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:02.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:20.570 09:37:48 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:20.570 09:37:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.570 09:37:48 -- common/autotest_common.sh@10 -- # set +x 00:04:20.570 09:37:48 -- spdk/autotest.sh@78 -- # rm -f 00:04:20.570 09:37:48 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:21.513 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:21.513 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:21.513 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:21.513 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:21.513 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:21.513 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:21.513 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:21.513 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:21.513 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:21.513 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:21.513 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:21.513 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:21.513 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:21.513 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:21.775 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:04:21.775 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:04:21.775 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:22.036 09:37:52 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:22.036 09:37:52 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:22.036 09:37:52 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:22.036 09:37:52 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:22.036 09:37:52 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:22.036 09:37:52 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:22.036 09:37:52 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:22.036 09:37:52 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:22.036 09:37:52 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:22.036 09:37:52 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:22.036 09:37:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:22.036 09:37:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:22.036 09:37:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:22.036 09:37:52 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:22.036 09:37:52 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:22.036 No valid GPT data, bailing 00:04:22.036 09:37:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:22.036 09:37:52 -- scripts/common.sh@394 -- # pt= 00:04:22.036 09:37:52 -- scripts/common.sh@395 -- # return 1 00:04:22.036 09:37:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:22.036 1+0 records in 00:04:22.036 1+0 records out 00:04:22.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00179115 s, 585 MB/s 00:04:22.036 09:37:52 -- spdk/autotest.sh@105 -- # sync 00:04:22.036 09:37:52 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:22.036 09:37:52 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:22.036 09:37:52 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:32.040 09:38:01 -- spdk/autotest.sh@111 -- # uname -s 00:04:32.040 09:38:01 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:32.040 09:38:01 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:32.040 09:38:01 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:33.955 Hugepages 00:04:33.955 node hugesize free / total 00:04:34.215 node0 1048576kB 0 / 0 00:04:34.215 node0 2048kB 0 / 0 00:04:34.215 node1 1048576kB 0 / 0 00:04:34.215 node1 2048kB 0 / 0 00:04:34.215 00:04:34.215 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:34.215 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:34.215 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:34.215 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:34.215 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:34.215 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:34.215 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:34.215 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:34.215 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:34.215 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:34.215 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:34.215 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:34.215 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:34.215 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:34.216 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:34.216 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:34.216 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:34.216 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:04:34.216 09:38:05 -- spdk/autotest.sh@117 -- # uname -s 00:04:34.216 09:38:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:34.216 09:38:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:34.216 09:38:05 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:38.424 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:38.424 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:38.424 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:38.424 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:38.424 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:38.424 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:38.424 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:38.424 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:38.424 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:38.424 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:38.424 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:38.424 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:38.424 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:38.424 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:38.424 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:38.424 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:39.809 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:40.070 09:38:10 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:41.011 09:38:11 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:41.011 09:38:11 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:41.011 09:38:11 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:41.011 09:38:11 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:41.011 09:38:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:41.011 09:38:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:41.011 09:38:11 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:41.011 09:38:11 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:41.011 09:38:11 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:41.272 09:38:11 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:41.272 09:38:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:41.272 09:38:11 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:44.573 Waiting for block devices as requested 00:04:44.573 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:44.573 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:44.833 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:44.833 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:44.833 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:45.094 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:45.094 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:45.094 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:45.354 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:45.354 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:45.354 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:45.614 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:45.614 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:45.614 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:45.873 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:45.873 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:45.873 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:04:46.443 09:38:17 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:46.443 09:38:17 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:46.443 09:38:17 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:46.443 09:38:17 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:04:46.443 09:38:17 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:46.443 09:38:17 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:46.443 09:38:17 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:46.443 09:38:17 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:46.443 09:38:17 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:46.443 09:38:17 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:46.443 09:38:17 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:46.443 09:38:17 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:46.443 09:38:17 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:46.443 09:38:17 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:04:46.443 09:38:17 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:46.443 09:38:17 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:46.443 09:38:17 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:46.443 09:38:17 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:46.443 09:38:17 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:46.443 09:38:17 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:46.443 09:38:17 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:46.443 09:38:17 -- common/autotest_common.sh@1543 -- # continue 00:04:46.443 09:38:17 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:46.443 09:38:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:46.443 09:38:17 -- common/autotest_common.sh@10 -- # set +x 00:04:46.443 09:38:17 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:46.443 09:38:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.443 09:38:17 -- common/autotest_common.sh@10 -- # set +x 00:04:46.443 09:38:17 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:49.742 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:49.742 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:49.742 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:49.742 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:49.742 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:50.003 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:50.003 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:50.003 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:50.003 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:50.003 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:50.003 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:50.003 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:50.003 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:50.003 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:50.003 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:50.003 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:50.003 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:50.575 09:38:21 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:50.575 09:38:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:50.575 09:38:21 -- common/autotest_common.sh@10 -- # set +x 00:04:50.575 09:38:21 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:50.575 09:38:21 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:50.575 09:38:21 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:50.575 09:38:21 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:50.575 09:38:21 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:50.575 09:38:21 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:50.575 09:38:21 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:50.575 09:38:21 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:50.575 09:38:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:50.575 09:38:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:50.575 09:38:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:50.575 09:38:21 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:50.575 09:38:21 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:50.575 09:38:21 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:50.575 09:38:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:50.575 09:38:21 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:50.575 09:38:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:50.575 09:38:21 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:04:50.575 09:38:21 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:50.575 09:38:21 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:50.575 09:38:21 -- common/autotest_common.sh@1572 -- # return 0 00:04:50.575 09:38:21 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:50.575 09:38:21 -- common/autotest_common.sh@1580 -- # return 0 00:04:50.575 09:38:21 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:50.575 09:38:21 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:50.575 09:38:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:50.575 09:38:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:50.575 09:38:21 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:50.575 09:38:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:50.575 09:38:21 -- common/autotest_common.sh@10 -- # set +x 00:04:50.575 09:38:21 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:50.575 09:38:21 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:50.575 09:38:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.575 09:38:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.575 09:38:21 -- common/autotest_common.sh@10 -- # set +x 00:04:50.575 ************************************ 00:04:50.575 START TEST env 00:04:50.575 ************************************ 00:04:50.575 09:38:21 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:50.835 * Looking for test storage... 
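The opal_revert_cleanup path above rebuilds its BDF list via gen_nvme.sh and then keeps only controllers whose PCI device ID matches 0x0a54; the lone controller on this node reports 0xa80a (vendor 144d), so the filtered list stays empty and the revert is skipped, which is the (( 0 > 0 )) branch in the trace. A condensed sketch of the same filter, with illustrative variable names:

  wanted=0x0a54                                       # device ID the opal tests target
  declare -a opal_bdfs=()
  for bdf in "${bdfs[@]}"; do                         # bdfs from gen_nvme.sh | jq '...traddr'
      dev=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0xa80a on this node
      [[ $dev == "$wanted" ]] && opal_bdfs+=("$bdf")
  done
  (( ${#opal_bdfs[@]} )) || echo "no 0x0a54 controllers; skipping opal revert"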
00:04:50.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:50.835 09:38:21 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.835 09:38:21 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.835 09:38:21 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.835 09:38:21 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.835 09:38:21 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.835 09:38:21 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.835 09:38:21 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.835 09:38:21 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.835 09:38:21 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.835 09:38:21 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.835 09:38:21 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.835 09:38:21 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.835 09:38:21 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.835 09:38:21 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.835 09:38:21 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.835 09:38:21 env -- scripts/common.sh@344 -- # case "$op" in 00:04:50.835 09:38:21 env -- scripts/common.sh@345 -- # : 1 00:04:50.835 09:38:21 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.835 09:38:21 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.835 09:38:21 env -- scripts/common.sh@365 -- # decimal 1 00:04:50.835 09:38:21 env -- scripts/common.sh@353 -- # local d=1 00:04:50.835 09:38:21 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.835 09:38:21 env -- scripts/common.sh@355 -- # echo 1 00:04:50.835 09:38:21 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.835 09:38:21 env -- scripts/common.sh@366 -- # decimal 2 00:04:50.835 09:38:21 env -- scripts/common.sh@353 -- # local d=2 00:04:50.835 09:38:21 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.835 09:38:21 env -- scripts/common.sh@355 -- # echo 2 00:04:50.835 09:38:21 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.835 09:38:21 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.835 09:38:21 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.835 09:38:21 env -- scripts/common.sh@368 -- # return 0 00:04:50.835 09:38:21 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.835 09:38:21 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.835 --rc genhtml_branch_coverage=1 00:04:50.835 --rc genhtml_function_coverage=1 00:04:50.835 --rc genhtml_legend=1 00:04:50.835 --rc geninfo_all_blocks=1 00:04:50.835 --rc geninfo_unexecuted_blocks=1 00:04:50.835 00:04:50.835 ' 00:04:50.835 09:38:21 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.835 --rc genhtml_branch_coverage=1 00:04:50.835 --rc genhtml_function_coverage=1 00:04:50.835 --rc genhtml_legend=1 00:04:50.835 --rc geninfo_all_blocks=1 00:04:50.835 --rc geninfo_unexecuted_blocks=1 00:04:50.835 00:04:50.835 ' 00:04:50.835 09:38:21 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.835 --rc genhtml_branch_coverage=1 00:04:50.835 --rc genhtml_function_coverage=1 
00:04:50.835 --rc genhtml_legend=1 00:04:50.835 --rc geninfo_all_blocks=1 00:04:50.835 --rc geninfo_unexecuted_blocks=1 00:04:50.835 00:04:50.835 ' 00:04:50.835 09:38:21 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.836 --rc genhtml_branch_coverage=1 00:04:50.836 --rc genhtml_function_coverage=1 00:04:50.836 --rc genhtml_legend=1 00:04:50.836 --rc geninfo_all_blocks=1 00:04:50.836 --rc geninfo_unexecuted_blocks=1 00:04:50.836 00:04:50.836 ' 00:04:50.836 09:38:21 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:50.836 09:38:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.836 09:38:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.836 09:38:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.836 ************************************ 00:04:50.836 START TEST env_memory 00:04:50.836 ************************************ 00:04:50.836 09:38:21 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:50.836 00:04:50.836 00:04:50.836 CUnit - A unit testing framework for C - Version 2.1-3 00:04:50.836 http://cunit.sourceforge.net/ 00:04:50.836 00:04:50.836 00:04:50.836 Suite: memory 00:04:50.836 Test: alloc and free memory map ...[2024-11-20 09:38:21.714805] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:50.836 passed 00:04:50.836 Test: mem map translation ...[2024-11-20 09:38:21.740516] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:50.836 [2024-11-20 09:38:21.740546] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:50.836 [2024-11-20 09:38:21.740592] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:50.836 [2024-11-20 09:38:21.740600] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:51.097 passed 00:04:51.097 Test: mem map registration ...[2024-11-20 09:38:21.796050] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:51.097 [2024-11-20 09:38:21.796095] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:51.097 passed 00:04:51.097 Test: mem map adjacent registrations ...passed 00:04:51.097 00:04:51.097 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.097 suites 1 1 n/a 0 0 00:04:51.097 tests 4 4 4 0 0 00:04:51.097 asserts 152 152 152 0 n/a 00:04:51.097 00:04:51.097 Elapsed time = 0.197 seconds 00:04:51.097 00:04:51.097 real 0m0.212s 00:04:51.097 user 0m0.203s 00:04:51.097 sys 0m0.008s 00:04:51.097 09:38:21 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.097 09:38:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
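The *ERROR* lines inside the mem map translation test above are deliberate negative cases: translations are registered at 2 MiB granularity, so a 1234-byte length (or a 1234/0x4d2 vaddr) is rejected, as is the out-of-usermode-range address 281474976710656 (2^48). An illustrative granularity check only, assuming just the 2 MiB mask; this is not the SPDK implementation, which also bounds-checks the address range:

  MASK_2MB=$(( (1 << 21) - 1 ))
  valid_translation() {
      local vaddr=$1 len=$2
      (( vaddr & MASK_2MB )) && return 1   # vaddr=1234 fails here
      (( len   & MASK_2MB )) && return 1   # len=1234 fails here
      return 0
  }
  valid_translation 2097152 1234 || echo rejected   # matches the logged *ERROR*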
00:04:51.097 ************************************ 00:04:51.097 END TEST env_memory 00:04:51.097 ************************************ 00:04:51.097 09:38:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:51.097 09:38:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.097 09:38:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.097 09:38:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.097 ************************************ 00:04:51.097 START TEST env_vtophys 00:04:51.097 ************************************ 00:04:51.097 09:38:21 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:51.097 EAL: lib.eal log level changed from notice to debug 00:04:51.097 EAL: Detected lcore 0 as core 0 on socket 0 00:04:51.097 EAL: Detected lcore 1 as core 1 on socket 0 00:04:51.097 EAL: Detected lcore 2 as core 2 on socket 0 00:04:51.097 EAL: Detected lcore 3 as core 3 on socket 0 00:04:51.097 EAL: Detected lcore 4 as core 4 on socket 0 00:04:51.097 EAL: Detected lcore 5 as core 5 on socket 0 00:04:51.097 EAL: Detected lcore 6 as core 6 on socket 0 00:04:51.097 EAL: Detected lcore 7 as core 7 on socket 0 00:04:51.097 EAL: Detected lcore 8 as core 8 on socket 0 00:04:51.097 EAL: Detected lcore 9 as core 9 on socket 0 00:04:51.097 EAL: Detected lcore 10 as core 10 on socket 0 00:04:51.097 EAL: Detected lcore 11 as core 11 on socket 0 00:04:51.097 EAL: Detected lcore 12 as core 12 on socket 0 00:04:51.097 EAL: Detected lcore 13 as core 13 on socket 0 00:04:51.097 EAL: Detected lcore 14 as core 14 on socket 0 00:04:51.097 EAL: Detected lcore 15 as core 15 on socket 0 00:04:51.097 EAL: Detected lcore 16 as core 16 on socket 0 00:04:51.097 EAL: Detected lcore 17 as core 17 on socket 0 00:04:51.097 EAL: Detected lcore 18 as core 18 on socket 0 00:04:51.097 EAL: Detected lcore 19 as core 19 on socket 0 00:04:51.097 EAL: Detected lcore 20 as core 20 on socket 0 00:04:51.097 EAL: Detected lcore 21 as core 21 on socket 0 00:04:51.097 EAL: Detected lcore 22 as core 22 on socket 0 00:04:51.097 EAL: Detected lcore 23 as core 23 on socket 0 00:04:51.097 EAL: Detected lcore 24 as core 24 on socket 0 00:04:51.097 EAL: Detected lcore 25 as core 25 on socket 0 00:04:51.097 EAL: Detected lcore 26 as core 26 on socket 0 00:04:51.097 EAL: Detected lcore 27 as core 27 on socket 0 00:04:51.097 EAL: Detected lcore 28 as core 28 on socket 0 00:04:51.097 EAL: Detected lcore 29 as core 29 on socket 0 00:04:51.097 EAL: Detected lcore 30 as core 30 on socket 0 00:04:51.097 EAL: Detected lcore 31 as core 31 on socket 0 00:04:51.097 EAL: Detected lcore 32 as core 32 on socket 0 00:04:51.097 EAL: Detected lcore 33 as core 33 on socket 0 00:04:51.097 EAL: Detected lcore 34 as core 34 on socket 0 00:04:51.097 EAL: Detected lcore 35 as core 35 on socket 0 00:04:51.097 EAL: Detected lcore 36 as core 0 on socket 1 00:04:51.097 EAL: Detected lcore 37 as core 1 on socket 1 00:04:51.097 EAL: Detected lcore 38 as core 2 on socket 1 00:04:51.097 EAL: Detected lcore 39 as core 3 on socket 1 00:04:51.097 EAL: Detected lcore 40 as core 4 on socket 1 00:04:51.097 EAL: Detected lcore 41 as core 5 on socket 1 00:04:51.097 EAL: Detected lcore 42 as core 6 on socket 1 00:04:51.097 EAL: Detected lcore 43 as core 7 on socket 1 00:04:51.097 EAL: Detected lcore 44 as core 8 on socket 1 00:04:51.097 EAL: Detected lcore 45 as core 9 on socket 1 
00:04:51.097 EAL: Detected lcore 46 as core 10 on socket 1 00:04:51.097 EAL: Detected lcore 47 as core 11 on socket 1 00:04:51.097 EAL: Detected lcore 48 as core 12 on socket 1 00:04:51.097 EAL: Detected lcore 49 as core 13 on socket 1 00:04:51.097 EAL: Detected lcore 50 as core 14 on socket 1 00:04:51.097 EAL: Detected lcore 51 as core 15 on socket 1 00:04:51.097 EAL: Detected lcore 52 as core 16 on socket 1 00:04:51.097 EAL: Detected lcore 53 as core 17 on socket 1 00:04:51.097 EAL: Detected lcore 54 as core 18 on socket 1 00:04:51.097 EAL: Detected lcore 55 as core 19 on socket 1 00:04:51.097 EAL: Detected lcore 56 as core 20 on socket 1 00:04:51.097 EAL: Detected lcore 57 as core 21 on socket 1 00:04:51.097 EAL: Detected lcore 58 as core 22 on socket 1 00:04:51.097 EAL: Detected lcore 59 as core 23 on socket 1 00:04:51.097 EAL: Detected lcore 60 as core 24 on socket 1 00:04:51.097 EAL: Detected lcore 61 as core 25 on socket 1 00:04:51.097 EAL: Detected lcore 62 as core 26 on socket 1 00:04:51.097 EAL: Detected lcore 63 as core 27 on socket 1 00:04:51.097 EAL: Detected lcore 64 as core 28 on socket 1 00:04:51.097 EAL: Detected lcore 65 as core 29 on socket 1 00:04:51.097 EAL: Detected lcore 66 as core 30 on socket 1 00:04:51.097 EAL: Detected lcore 67 as core 31 on socket 1 00:04:51.097 EAL: Detected lcore 68 as core 32 on socket 1 00:04:51.097 EAL: Detected lcore 69 as core 33 on socket 1 00:04:51.097 EAL: Detected lcore 70 as core 34 on socket 1 00:04:51.097 EAL: Detected lcore 71 as core 35 on socket 1 00:04:51.097 EAL: Detected lcore 72 as core 0 on socket 0 00:04:51.097 EAL: Detected lcore 73 as core 1 on socket 0 00:04:51.097 EAL: Detected lcore 74 as core 2 on socket 0 00:04:51.097 EAL: Detected lcore 75 as core 3 on socket 0 00:04:51.097 EAL: Detected lcore 76 as core 4 on socket 0 00:04:51.098 EAL: Detected lcore 77 as core 5 on socket 0 00:04:51.098 EAL: Detected lcore 78 as core 6 on socket 0 00:04:51.098 EAL: Detected lcore 79 as core 7 on socket 0 00:04:51.098 EAL: Detected lcore 80 as core 8 on socket 0 00:04:51.098 EAL: Detected lcore 81 as core 9 on socket 0 00:04:51.098 EAL: Detected lcore 82 as core 10 on socket 0 00:04:51.098 EAL: Detected lcore 83 as core 11 on socket 0 00:04:51.098 EAL: Detected lcore 84 as core 12 on socket 0 00:04:51.098 EAL: Detected lcore 85 as core 13 on socket 0 00:04:51.098 EAL: Detected lcore 86 as core 14 on socket 0 00:04:51.098 EAL: Detected lcore 87 as core 15 on socket 0 00:04:51.098 EAL: Detected lcore 88 as core 16 on socket 0 00:04:51.098 EAL: Detected lcore 89 as core 17 on socket 0 00:04:51.098 EAL: Detected lcore 90 as core 18 on socket 0 00:04:51.098 EAL: Detected lcore 91 as core 19 on socket 0 00:04:51.098 EAL: Detected lcore 92 as core 20 on socket 0 00:04:51.098 EAL: Detected lcore 93 as core 21 on socket 0 00:04:51.098 EAL: Detected lcore 94 as core 22 on socket 0 00:04:51.098 EAL: Detected lcore 95 as core 23 on socket 0 00:04:51.098 EAL: Detected lcore 96 as core 24 on socket 0 00:04:51.098 EAL: Detected lcore 97 as core 25 on socket 0 00:04:51.098 EAL: Detected lcore 98 as core 26 on socket 0 00:04:51.098 EAL: Detected lcore 99 as core 27 on socket 0 00:04:51.098 EAL: Detected lcore 100 as core 28 on socket 0 00:04:51.098 EAL: Detected lcore 101 as core 29 on socket 0 00:04:51.098 EAL: Detected lcore 102 as core 30 on socket 0 00:04:51.098 EAL: Detected lcore 103 as core 31 on socket 0 00:04:51.098 EAL: Detected lcore 104 as core 32 on socket 0 00:04:51.098 EAL: Detected lcore 105 as core 33 on socket 0 00:04:51.098 EAL: 
Detected lcore 106 as core 34 on socket 0 00:04:51.098 EAL: Detected lcore 107 as core 35 on socket 0 00:04:51.098 EAL: Detected lcore 108 as core 0 on socket 1 00:04:51.098 EAL: Detected lcore 109 as core 1 on socket 1 00:04:51.098 EAL: Detected lcore 110 as core 2 on socket 1 00:04:51.098 EAL: Detected lcore 111 as core 3 on socket 1 00:04:51.098 EAL: Detected lcore 112 as core 4 on socket 1 00:04:51.098 EAL: Detected lcore 113 as core 5 on socket 1 00:04:51.098 EAL: Detected lcore 114 as core 6 on socket 1 00:04:51.098 EAL: Detected lcore 115 as core 7 on socket 1 00:04:51.098 EAL: Detected lcore 116 as core 8 on socket 1 00:04:51.098 EAL: Detected lcore 117 as core 9 on socket 1 00:04:51.098 EAL: Detected lcore 118 as core 10 on socket 1 00:04:51.098 EAL: Detected lcore 119 as core 11 on socket 1 00:04:51.098 EAL: Detected lcore 120 as core 12 on socket 1 00:04:51.098 EAL: Detected lcore 121 as core 13 on socket 1 00:04:51.098 EAL: Detected lcore 122 as core 14 on socket 1 00:04:51.098 EAL: Detected lcore 123 as core 15 on socket 1 00:04:51.098 EAL: Detected lcore 124 as core 16 on socket 1 00:04:51.098 EAL: Detected lcore 125 as core 17 on socket 1 00:04:51.098 EAL: Detected lcore 126 as core 18 on socket 1 00:04:51.098 EAL: Detected lcore 127 as core 19 on socket 1 00:04:51.098 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:51.098 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:51.098 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:51.098 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:51.098 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:51.098 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:51.098 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:51.098 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:51.098 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:51.098 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:51.098 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:51.098 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:51.098 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:51.098 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:51.098 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:51.098 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:51.098 EAL: Maximum logical cores by configuration: 128 00:04:51.098 EAL: Detected CPU lcores: 128 00:04:51.098 EAL: Detected NUMA nodes: 2 00:04:51.098 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:51.098 EAL: Detected shared linkage of DPDK 00:04:51.098 EAL: No shared files mode enabled, IPC will be disabled 00:04:51.098 EAL: Bus pci wants IOVA as 'DC' 00:04:51.098 EAL: Buses did not request a specific IOVA mode. 00:04:51.098 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:51.098 EAL: Selected IOVA mode 'VA' 00:04:51.098 EAL: Probing VFIO support... 00:04:51.098 EAL: IOMMU type 1 (Type 1) is supported 00:04:51.098 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:51.098 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:51.098 EAL: VFIO support initialized 00:04:51.098 EAL: Ask a virtual area of 0x2e000 bytes 00:04:51.098 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:51.098 EAL: Setting up physically contiguous memory... 
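The probe sequence above (IOMMU available, IOVA mode 'VA' selected, VFIO type 1 supported) is what lets EAL hand devices plain virtual addresses instead of physical ones. A rough userspace check for the same precondition, offered as a stand-in rather than DPDK's actual probe logic:

  # An IOMMU exposed to the kernel shows up as populated iommu_groups;
  # VFIO type 1 is the interface layered over it on this platform.
  if compgen -G '/sys/kernel/iommu_groups/*' >/dev/null; then
      echo "IOMMU groups present: VFIO with IOVA=VA is possible"
  else
      echo "no IOMMU visible: EAL would have to fall back to IOVA=PA"
  fi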
00:04:51.098 EAL: Setting maximum number of open files to 524288 00:04:51.098 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:51.098 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:51.098 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:51.098 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.098 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:51.098 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:51.098 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.098 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:51.098 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:51.098 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.098 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:51.098 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:51.098 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.098 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:51.098 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:51.098 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.098 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:51.098 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:51.098 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.098 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:51.098 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:51.098 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.098 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:51.359 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:51.359 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.359 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:51.359 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:51.359 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:51.359 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.359 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:51.359 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:51.359 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.359 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:51.359 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:51.359 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.359 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:51.359 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:51.359 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.359 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:51.359 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:51.359 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.359 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:51.359 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:51.359 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.359 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:51.359 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:51.359 EAL: Ask a virtual area of 0x61000 bytes 00:04:51.359 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:51.359 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:51.359 EAL: Ask a virtual area of 0x400000000 bytes 00:04:51.359 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:51.359 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:51.359 EAL: Hugepages will be freed exactly as allocated. 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: TSC frequency is ~2400000 KHz 00:04:51.359 EAL: Main lcore 0 is ready (tid=7f4afa949a00;cpuset=[0]) 00:04:51.359 EAL: Trying to obtain current memory policy. 00:04:51.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.359 EAL: Restoring previous memory policy: 0 00:04:51.359 EAL: request: mp_malloc_sync 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: Heap on socket 0 was expanded by 2MB 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:51.359 EAL: Mem event callback 'spdk:(nil)' registered 00:04:51.359 00:04:51.359 00:04:51.359 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.359 http://cunit.sourceforge.net/ 00:04:51.359 00:04:51.359 00:04:51.359 Suite: components_suite 00:04:51.359 Test: vtophys_malloc_test ...passed 00:04:51.359 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:51.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.359 EAL: Restoring previous memory policy: 4 00:04:51.359 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.359 EAL: request: mp_malloc_sync 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: Heap on socket 0 was expanded by 4MB 00:04:51.359 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.359 EAL: request: mp_malloc_sync 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: Heap on socket 0 was shrunk by 4MB 00:04:51.359 EAL: Trying to obtain current memory policy. 00:04:51.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.359 EAL: Restoring previous memory policy: 4 00:04:51.359 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.359 EAL: request: mp_malloc_sync 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: Heap on socket 0 was expanded by 6MB 00:04:51.359 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.359 EAL: request: mp_malloc_sync 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: Heap on socket 0 was shrunk by 6MB 00:04:51.359 EAL: Trying to obtain current memory policy. 00:04:51.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.359 EAL: Restoring previous memory policy: 4 00:04:51.359 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.359 EAL: request: mp_malloc_sync 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: Heap on socket 0 was expanded by 10MB 00:04:51.359 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.359 EAL: request: mp_malloc_sync 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: Heap on socket 0 was shrunk by 10MB 00:04:51.359 EAL: Trying to obtain current memory policy. 
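The virtual-area sizes in the memseg setup above are easy to sanity-check: each list holds 8192 segments of one 2 MiB hugepage, so the data area is 8192 * 2097152 = 0x400000000 bytes (16 GiB), preceded by a 0x61000-byte bookkeeping area; with 4 lists per socket on 2 sockets, EAL pre-reserves 128 GiB of address space before a single page is faulted in:

  printf '0x%x\n' $(( 8192 * 2097152 ))   # -> 0x400000000, one list's data area
  echo $(( 4 * 2 * 16 ))                  # -> 128 (GiB reserved across both sockets)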
00:04:51.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.359 EAL: Restoring previous memory policy: 4 00:04:51.359 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.359 EAL: request: mp_malloc_sync 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: Heap on socket 0 was expanded by 18MB 00:04:51.359 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.359 EAL: request: mp_malloc_sync 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: Heap on socket 0 was shrunk by 18MB 00:04:51.359 EAL: Trying to obtain current memory policy. 00:04:51.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.359 EAL: Restoring previous memory policy: 4 00:04:51.359 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.359 EAL: request: mp_malloc_sync 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: Heap on socket 0 was expanded by 34MB 00:04:51.359 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.359 EAL: request: mp_malloc_sync 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: Heap on socket 0 was shrunk by 34MB 00:04:51.359 EAL: Trying to obtain current memory policy. 00:04:51.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.359 EAL: Restoring previous memory policy: 4 00:04:51.359 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.359 EAL: request: mp_malloc_sync 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: Heap on socket 0 was expanded by 66MB 00:04:51.359 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.359 EAL: request: mp_malloc_sync 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: Heap on socket 0 was shrunk by 66MB 00:04:51.359 EAL: Trying to obtain current memory policy. 00:04:51.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.359 EAL: Restoring previous memory policy: 4 00:04:51.359 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.359 EAL: request: mp_malloc_sync 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: Heap on socket 0 was expanded by 130MB 00:04:51.359 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.359 EAL: request: mp_malloc_sync 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: Heap on socket 0 was shrunk by 130MB 00:04:51.359 EAL: Trying to obtain current memory policy. 00:04:51.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.359 EAL: Restoring previous memory policy: 4 00:04:51.359 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.359 EAL: request: mp_malloc_sync 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: Heap on socket 0 was expanded by 258MB 00:04:51.359 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.359 EAL: request: mp_malloc_sync 00:04:51.359 EAL: No shared files mode enabled, IPC is disabled 00:04:51.359 EAL: Heap on socket 0 was shrunk by 258MB 00:04:51.359 EAL: Trying to obtain current memory policy. 
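The expand/shrink sizes marching past above (4, 6, 10, 18, 34, 66, 130, 258 MB, with 514 and 1026 MB still to come) follow a 2^k + 2 MB progression; reading the pattern off the log (the test's own loop may be written differently), it reproduces as:

  for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
  # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB

Each step expands the heap, fires the 'spdk:(nil)' mem event callback, and shrinks back before roughly doubling, exercising vtophys_spdk_malloc_test across the whole range.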
00:04:51.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.619 EAL: Restoring previous memory policy: 4 00:04:51.619 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.619 EAL: request: mp_malloc_sync 00:04:51.619 EAL: No shared files mode enabled, IPC is disabled 00:04:51.619 EAL: Heap on socket 0 was expanded by 514MB 00:04:51.619 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.619 EAL: request: mp_malloc_sync 00:04:51.619 EAL: No shared files mode enabled, IPC is disabled 00:04:51.619 EAL: Heap on socket 0 was shrunk by 514MB 00:04:51.619 EAL: Trying to obtain current memory policy. 00:04:51.619 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.880 EAL: Restoring previous memory policy: 4 00:04:51.880 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.880 EAL: request: mp_malloc_sync 00:04:51.880 EAL: No shared files mode enabled, IPC is disabled 00:04:51.880 EAL: Heap on socket 0 was expanded by 1026MB 00:04:51.880 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.880 EAL: request: mp_malloc_sync 00:04:51.880 EAL: No shared files mode enabled, IPC is disabled 00:04:51.880 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:51.880 passed 00:04:51.880 00:04:51.880 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.880 suites 1 1 n/a 0 0 00:04:51.880 tests 2 2 2 0 0 00:04:51.880 asserts 497 497 497 0 n/a 00:04:51.880 00:04:51.880 Elapsed time = 0.686 seconds 00:04:51.880 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.880 EAL: request: mp_malloc_sync 00:04:51.880 EAL: No shared files mode enabled, IPC is disabled 00:04:51.880 EAL: Heap on socket 0 was shrunk by 2MB 00:04:51.880 EAL: No shared files mode enabled, IPC is disabled 00:04:51.880 EAL: No shared files mode enabled, IPC is disabled 00:04:51.880 EAL: No shared files mode enabled, IPC is disabled 00:04:51.880 00:04:51.880 real 0m0.832s 00:04:51.880 user 0m0.440s 00:04:51.880 sys 0m0.367s 00:04:51.880 09:38:22 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.880 09:38:22 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:51.880 ************************************ 00:04:51.880 END TEST env_vtophys 00:04:51.880 ************************************ 00:04:52.223 09:38:22 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:52.223 09:38:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.223 09:38:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.223 09:38:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.223 ************************************ 00:04:52.223 START TEST env_pci 00:04:52.223 ************************************ 00:04:52.223 09:38:22 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:52.223 00:04:52.223 00:04:52.223 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.223 http://cunit.sourceforge.net/ 00:04:52.223 00:04:52.223 00:04:52.223 Suite: pci 00:04:52.223 Test: pci_hook ...[2024-11-20 09:38:22.881995] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1130531 has claimed it 00:04:52.223 EAL: Cannot find device (10000:00:01.0) 00:04:52.223 EAL: Failed to attach device on primary process 00:04:52.223 passed 00:04:52.223 00:04:52.223 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:52.223 suites 1 1 n/a 0 0 00:04:52.223 tests 1 1 1 0 0 00:04:52.223 asserts 25 25 25 0 n/a 00:04:52.223 00:04:52.223 Elapsed time = 0.032 seconds 00:04:52.223 00:04:52.223 real 0m0.054s 00:04:52.223 user 0m0.015s 00:04:52.223 sys 0m0.038s 00:04:52.223 09:38:22 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.223 09:38:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:52.223 ************************************ 00:04:52.223 END TEST env_pci 00:04:52.223 ************************************ 00:04:52.223 09:38:22 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:52.223 09:38:22 env -- env/env.sh@15 -- # uname 00:04:52.223 09:38:22 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:52.223 09:38:22 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:52.223 09:38:22 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:52.223 09:38:22 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:52.223 09:38:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.223 09:38:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.223 ************************************ 00:04:52.223 START TEST env_dpdk_post_init 00:04:52.223 ************************************ 00:04:52.223 09:38:23 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:52.223 EAL: Detected CPU lcores: 128 00:04:52.223 EAL: Detected NUMA nodes: 2 00:04:52.223 EAL: Detected shared linkage of DPDK 00:04:52.223 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:52.223 EAL: Selected IOVA mode 'VA' 00:04:52.223 EAL: VFIO support initialized 00:04:52.223 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:52.545 EAL: Using IOMMU type 1 (Type 1) 00:04:52.545 EAL: Ignore mapping IO port bar(1) 00:04:52.545 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:52.808 EAL: Ignore mapping IO port bar(1) 00:04:52.808 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:52.808 EAL: Ignore mapping IO port bar(1) 00:04:53.068 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:53.068 EAL: Ignore mapping IO port bar(1) 00:04:53.329 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:53.329 EAL: Ignore mapping IO port bar(1) 00:04:53.591 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:53.591 EAL: Ignore mapping IO port bar(1) 00:04:53.591 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:53.853 EAL: Ignore mapping IO port bar(1) 00:04:53.853 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:54.114 EAL: Ignore mapping IO port bar(1) 00:04:54.114 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:54.375 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:54.375 EAL: Ignore mapping IO port bar(1) 00:04:54.636 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:54.636 EAL: Ignore mapping IO port bar(1) 00:04:54.896 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:54.896 EAL: Ignore mapping IO port bar(1) 00:04:55.157 
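The env_pci suite above passed by design: it claims the bogus address 10000:00:01.0 so spdk_pci_device_claim exercises its lock-file failure path. The env_dpdk_post_init run now probing attaches the real hardware instead, ioat DMA channels (8086:0b00) on both sockets plus one NVMe controller (144d:a80a), and the probing continues below. A hedged replay, assuming it is run from the SPDK repo root this job checked out:

```bash
# The two PCI IDs come straight from the probe lines in this log.
lspci -d 8086:0b00    # Intel ioat engines claimed by spdk_ioat
lspci -d 144d:a80a    # Samsung NVMe controller at 0000:65:00.0

# Re-run the post-init test with the same argv env.sh assembled above.
test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
```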
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:55.157 EAL: Ignore mapping IO port bar(1) 00:04:55.157 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:55.418 EAL: Ignore mapping IO port bar(1) 00:04:55.418 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:55.678 EAL: Ignore mapping IO port bar(1) 00:04:55.679 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:55.940 EAL: Ignore mapping IO port bar(1) 00:04:55.940 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:55.940 EAL: Ignore mapping IO port bar(1) 00:04:56.200 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:56.200 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:56.200 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:56.200 Starting DPDK initialization... 00:04:56.200 Starting SPDK post initialization... 00:04:56.200 SPDK NVMe probe 00:04:56.200 Attaching to 0000:65:00.0 00:04:56.200 Attached to 0000:65:00.0 00:04:56.200 Cleaning up... 00:04:58.113 00:04:58.113 real 0m5.740s 00:04:58.113 user 0m0.100s 00:04:58.113 sys 0m0.199s 00:04:58.113 09:38:28 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.113 09:38:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:58.113 ************************************ 00:04:58.113 END TEST env_dpdk_post_init 00:04:58.113 ************************************ 00:04:58.113 09:38:28 env -- env/env.sh@26 -- # uname 00:04:58.113 09:38:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:58.113 09:38:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:58.113 09:38:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.113 09:38:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.113 09:38:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.113 ************************************ 00:04:58.113 START TEST env_mem_callbacks 00:04:58.113 ************************************ 00:04:58.113 09:38:28 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:58.113 EAL: Detected CPU lcores: 128 00:04:58.113 EAL: Detected NUMA nodes: 2 00:04:58.113 EAL: Detected shared linkage of DPDK 00:04:58.113 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:58.113 EAL: Selected IOVA mode 'VA' 00:04:58.113 EAL: VFIO support initialized 00:04:58.113 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:58.113 00:04:58.113 00:04:58.113 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.113 http://cunit.sourceforge.net/ 00:04:58.113 00:04:58.113 00:04:58.113 Suite: memory 00:04:58.113 Test: test ... 
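The memory suite whose output follows pairs each malloc with a registration rounded up to whole 2 MiB hugepages: the 3 MiB malloc shows up as a two-page `register ... 4194304`, while the 4 MiB and 8 MiB mallocs each carry one extra page (6 MiB and 10 MiB registrations), presumably allocator metadata spilling into another hugepage. A sanity check of the basic rounding (page size assumed to be the 2 MiB the addresses imply):

```bash
# Round a 3 MiB allocation up to 2 MiB hugepages; prints 4194304,
# matching the 'register 0x200000400000 4194304' line below.
page=$((2 * 1024 * 1024))
size=$((3 * 1024 * 1024))
echo $(( (size + page - 1) / page * page ))
```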
00:04:58.113 register 0x200000200000 2097152 00:04:58.113 malloc 3145728 00:04:58.113 register 0x200000400000 4194304 00:04:58.113 buf 0x200000500000 len 3145728 PASSED 00:04:58.113 malloc 64 00:04:58.113 buf 0x2000004fff40 len 64 PASSED 00:04:58.113 malloc 4194304 00:04:58.113 register 0x200000800000 6291456 00:04:58.113 buf 0x200000a00000 len 4194304 PASSED 00:04:58.113 free 0x200000500000 3145728 00:04:58.113 free 0x2000004fff40 64 00:04:58.113 unregister 0x200000400000 4194304 PASSED 00:04:58.113 free 0x200000a00000 4194304 00:04:58.113 unregister 0x200000800000 6291456 PASSED 00:04:58.113 malloc 8388608 00:04:58.113 register 0x200000400000 10485760 00:04:58.113 buf 0x200000600000 len 8388608 PASSED 00:04:58.113 free 0x200000600000 8388608 00:04:58.113 unregister 0x200000400000 10485760 PASSED 00:04:58.113 passed 00:04:58.113 00:04:58.113 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.113 suites 1 1 n/a 0 0 00:04:58.113 tests 1 1 1 0 0 00:04:58.113 asserts 15 15 15 0 n/a 00:04:58.113 00:04:58.113 Elapsed time = 0.010 seconds 00:04:58.113 00:04:58.113 real 0m0.069s 00:04:58.113 user 0m0.025s 00:04:58.113 sys 0m0.043s 00:04:58.113 09:38:28 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.113 09:38:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:58.113 ************************************ 00:04:58.113 END TEST env_mem_callbacks 00:04:58.113 ************************************ 00:04:58.113 00:04:58.113 real 0m7.532s 00:04:58.113 user 0m1.059s 00:04:58.113 sys 0m1.043s 00:04:58.113 09:38:28 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.113 09:38:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.113 ************************************ 00:04:58.113 END TEST env 00:04:58.113 ************************************ 00:04:58.113 09:38:28 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:58.113 09:38:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.113 09:38:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.113 09:38:28 -- common/autotest_common.sh@10 -- # set +x 00:04:58.375 ************************************ 00:04:58.375 START TEST rpc 00:04:58.375 ************************************ 00:04:58.375 09:38:29 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:58.375 * Looking for test storage... 
00:04:58.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:58.375 09:38:29 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.375 09:38:29 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:58.375 09:38:29 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.375 09:38:29 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.375 09:38:29 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.375 09:38:29 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.375 09:38:29 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.375 09:38:29 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.375 09:38:29 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.375 09:38:29 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.375 09:38:29 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.375 09:38:29 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.375 09:38:29 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.375 09:38:29 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.375 09:38:29 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.375 09:38:29 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:58.375 09:38:29 rpc -- scripts/common.sh@345 -- # : 1 00:04:58.375 09:38:29 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.375 09:38:29 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.375 09:38:29 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:58.375 09:38:29 rpc -- scripts/common.sh@353 -- # local d=1 00:04:58.375 09:38:29 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.375 09:38:29 rpc -- scripts/common.sh@355 -- # echo 1 00:04:58.375 09:38:29 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.375 09:38:29 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:58.375 09:38:29 rpc -- scripts/common.sh@353 -- # local d=2 00:04:58.375 09:38:29 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.375 09:38:29 rpc -- scripts/common.sh@355 -- # echo 2 00:04:58.375 09:38:29 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.375 09:38:29 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.375 09:38:29 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.375 09:38:29 rpc -- scripts/common.sh@368 -- # return 0 00:04:58.375 09:38:29 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.375 09:38:29 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.375 --rc genhtml_branch_coverage=1 00:04:58.375 --rc genhtml_function_coverage=1 00:04:58.375 --rc genhtml_legend=1 00:04:58.375 --rc geninfo_all_blocks=1 00:04:58.375 --rc geninfo_unexecuted_blocks=1 00:04:58.375 00:04:58.375 ' 00:04:58.375 09:38:29 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.375 --rc genhtml_branch_coverage=1 00:04:58.375 --rc genhtml_function_coverage=1 00:04:58.375 --rc genhtml_legend=1 00:04:58.375 --rc geninfo_all_blocks=1 00:04:58.375 --rc geninfo_unexecuted_blocks=1 00:04:58.375 00:04:58.375 ' 00:04:58.375 09:38:29 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:58.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.375 --rc genhtml_branch_coverage=1 00:04:58.375 --rc genhtml_function_coverage=1 
00:04:58.375 --rc genhtml_legend=1 00:04:58.375 --rc geninfo_all_blocks=1 00:04:58.375 --rc geninfo_unexecuted_blocks=1 00:04:58.375 00:04:58.375 ' 00:04:58.375 09:38:29 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.375 --rc genhtml_branch_coverage=1 00:04:58.375 --rc genhtml_function_coverage=1 00:04:58.375 --rc genhtml_legend=1 00:04:58.375 --rc geninfo_all_blocks=1 00:04:58.375 --rc geninfo_unexecuted_blocks=1 00:04:58.375 00:04:58.375 ' 00:04:58.375 09:38:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1131895 00:04:58.375 09:38:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.375 09:38:29 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:58.375 09:38:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1131895 00:04:58.375 09:38:29 rpc -- common/autotest_common.sh@835 -- # '[' -z 1131895 ']' 00:04:58.375 09:38:29 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.375 09:38:29 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.375 09:38:29 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.375 09:38:29 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.375 09:38:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.636 [2024-11-20 09:38:29.294674] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:04:58.636 [2024-11-20 09:38:29.294738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131895 ] 00:04:58.636 [2024-11-20 09:38:29.386667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.636 [2024-11-20 09:38:29.438422] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:58.636 [2024-11-20 09:38:29.438478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1131895' to capture a snapshot of events at runtime. 00:04:58.636 [2024-11-20 09:38:29.438486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:58.636 [2024-11-20 09:38:29.438494] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:58.636 [2024-11-20 09:38:29.438500] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1131895 for offline analysis/debug. 
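The target just started with `-e bdev` and printed exactly how to read back the tracepoint shared memory it created. Following its own suggestion (binary path assumed relative to the build tree; this must run on the same host while /dev/shm/spdk_tgt_trace.pid1131895 still exists):

```bash
# Dump the trace snapshot advertised above; only the bdev group
# (mask 0x8) has tracepoints enabled because of '-e bdev'.
build/bin/spdk_trace -s spdk_tgt -p 1131895
```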
00:04:58.636 [2024-11-20 09:38:29.439285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.209 09:38:30 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.209 09:38:30 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:59.209 09:38:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:59.209 09:38:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:59.209 09:38:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:59.209 09:38:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:59.209 09:38:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.209 09:38:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.209 09:38:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.470 ************************************ 00:04:59.470 START TEST rpc_integrity 00:04:59.470 ************************************ 00:04:59.470 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:59.470 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:59.470 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.470 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.470 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.470 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:59.470 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:59.470 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:59.470 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:59.470 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.470 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.470 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.470 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:59.470 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:59.470 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.470 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.470 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.470 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:59.470 { 00:04:59.470 "name": "Malloc0", 00:04:59.470 "aliases": [ 00:04:59.470 "1b738c67-05f1-43d7-8e39-510ce91bb09f" 00:04:59.470 ], 00:04:59.470 "product_name": "Malloc disk", 00:04:59.470 "block_size": 512, 00:04:59.470 "num_blocks": 16384, 00:04:59.470 "uuid": "1b738c67-05f1-43d7-8e39-510ce91bb09f", 00:04:59.470 "assigned_rate_limits": { 00:04:59.470 "rw_ios_per_sec": 0, 00:04:59.470 "rw_mbytes_per_sec": 0, 00:04:59.470 "r_mbytes_per_sec": 0, 00:04:59.470 "w_mbytes_per_sec": 0 00:04:59.470 }, 
00:04:59.470 "claimed": false, 00:04:59.470 "zoned": false, 00:04:59.470 "supported_io_types": { 00:04:59.470 "read": true, 00:04:59.470 "write": true, 00:04:59.470 "unmap": true, 00:04:59.470 "flush": true, 00:04:59.470 "reset": true, 00:04:59.470 "nvme_admin": false, 00:04:59.471 "nvme_io": false, 00:04:59.471 "nvme_io_md": false, 00:04:59.471 "write_zeroes": true, 00:04:59.471 "zcopy": true, 00:04:59.471 "get_zone_info": false, 00:04:59.471 "zone_management": false, 00:04:59.471 "zone_append": false, 00:04:59.471 "compare": false, 00:04:59.471 "compare_and_write": false, 00:04:59.471 "abort": true, 00:04:59.471 "seek_hole": false, 00:04:59.471 "seek_data": false, 00:04:59.471 "copy": true, 00:04:59.471 "nvme_iov_md": false 00:04:59.471 }, 00:04:59.471 "memory_domains": [ 00:04:59.471 { 00:04:59.471 "dma_device_id": "system", 00:04:59.471 "dma_device_type": 1 00:04:59.471 }, 00:04:59.471 { 00:04:59.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.471 "dma_device_type": 2 00:04:59.471 } 00:04:59.471 ], 00:04:59.471 "driver_specific": {} 00:04:59.471 } 00:04:59.471 ]' 00:04:59.471 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:59.471 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:59.471 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:59.471 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.471 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.471 [2024-11-20 09:38:30.296011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:59.471 [2024-11-20 09:38:30.296059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:59.471 [2024-11-20 09:38:30.296076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaf1db0 00:04:59.471 [2024-11-20 09:38:30.296084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:59.471 [2024-11-20 09:38:30.297651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.471 [2024-11-20 09:38:30.297688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:59.471 Passthru0 00:04:59.471 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.471 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:59.471 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.471 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.471 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.471 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:59.471 { 00:04:59.471 "name": "Malloc0", 00:04:59.471 "aliases": [ 00:04:59.471 "1b738c67-05f1-43d7-8e39-510ce91bb09f" 00:04:59.471 ], 00:04:59.471 "product_name": "Malloc disk", 00:04:59.471 "block_size": 512, 00:04:59.471 "num_blocks": 16384, 00:04:59.471 "uuid": "1b738c67-05f1-43d7-8e39-510ce91bb09f", 00:04:59.471 "assigned_rate_limits": { 00:04:59.471 "rw_ios_per_sec": 0, 00:04:59.471 "rw_mbytes_per_sec": 0, 00:04:59.471 "r_mbytes_per_sec": 0, 00:04:59.471 "w_mbytes_per_sec": 0 00:04:59.471 }, 00:04:59.471 "claimed": true, 00:04:59.471 "claim_type": "exclusive_write", 00:04:59.471 "zoned": false, 00:04:59.471 "supported_io_types": { 00:04:59.471 "read": true, 00:04:59.471 "write": true, 00:04:59.471 "unmap": true, 00:04:59.471 "flush": 
true, 00:04:59.471 "reset": true, 00:04:59.471 "nvme_admin": false, 00:04:59.471 "nvme_io": false, 00:04:59.471 "nvme_io_md": false, 00:04:59.471 "write_zeroes": true, 00:04:59.471 "zcopy": true, 00:04:59.471 "get_zone_info": false, 00:04:59.471 "zone_management": false, 00:04:59.471 "zone_append": false, 00:04:59.471 "compare": false, 00:04:59.471 "compare_and_write": false, 00:04:59.471 "abort": true, 00:04:59.471 "seek_hole": false, 00:04:59.471 "seek_data": false, 00:04:59.471 "copy": true, 00:04:59.471 "nvme_iov_md": false 00:04:59.471 }, 00:04:59.471 "memory_domains": [ 00:04:59.471 { 00:04:59.471 "dma_device_id": "system", 00:04:59.471 "dma_device_type": 1 00:04:59.471 }, 00:04:59.471 { 00:04:59.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.471 "dma_device_type": 2 00:04:59.471 } 00:04:59.471 ], 00:04:59.471 "driver_specific": {} 00:04:59.471 }, 00:04:59.471 { 00:04:59.471 "name": "Passthru0", 00:04:59.471 "aliases": [ 00:04:59.471 "96584b1b-8dff-5c52-b250-cc4ccec2ad0a" 00:04:59.471 ], 00:04:59.471 "product_name": "passthru", 00:04:59.471 "block_size": 512, 00:04:59.471 "num_blocks": 16384, 00:04:59.471 "uuid": "96584b1b-8dff-5c52-b250-cc4ccec2ad0a", 00:04:59.471 "assigned_rate_limits": { 00:04:59.471 "rw_ios_per_sec": 0, 00:04:59.471 "rw_mbytes_per_sec": 0, 00:04:59.471 "r_mbytes_per_sec": 0, 00:04:59.471 "w_mbytes_per_sec": 0 00:04:59.471 }, 00:04:59.471 "claimed": false, 00:04:59.471 "zoned": false, 00:04:59.471 "supported_io_types": { 00:04:59.471 "read": true, 00:04:59.471 "write": true, 00:04:59.471 "unmap": true, 00:04:59.471 "flush": true, 00:04:59.471 "reset": true, 00:04:59.471 "nvme_admin": false, 00:04:59.471 "nvme_io": false, 00:04:59.471 "nvme_io_md": false, 00:04:59.471 "write_zeroes": true, 00:04:59.471 "zcopy": true, 00:04:59.471 "get_zone_info": false, 00:04:59.471 "zone_management": false, 00:04:59.471 "zone_append": false, 00:04:59.471 "compare": false, 00:04:59.471 "compare_and_write": false, 00:04:59.471 "abort": true, 00:04:59.471 "seek_hole": false, 00:04:59.471 "seek_data": false, 00:04:59.471 "copy": true, 00:04:59.471 "nvme_iov_md": false 00:04:59.471 }, 00:04:59.471 "memory_domains": [ 00:04:59.471 { 00:04:59.471 "dma_device_id": "system", 00:04:59.471 "dma_device_type": 1 00:04:59.471 }, 00:04:59.471 { 00:04:59.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.471 "dma_device_type": 2 00:04:59.471 } 00:04:59.471 ], 00:04:59.471 "driver_specific": { 00:04:59.471 "passthru": { 00:04:59.471 "name": "Passthru0", 00:04:59.471 "base_bdev_name": "Malloc0" 00:04:59.471 } 00:04:59.471 } 00:04:59.471 } 00:04:59.471 ]' 00:04:59.471 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:59.471 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:59.471 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:59.471 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.471 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.732 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.732 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:59.732 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.732 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.732 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.732 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:59.732 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.732 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.732 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.732 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:59.732 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:59.733 09:38:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:59.733 00:04:59.733 real 0m0.302s 00:04:59.733 user 0m0.184s 00:04:59.733 sys 0m0.052s 00:04:59.733 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.733 09:38:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.733 ************************************ 00:04:59.733 END TEST rpc_integrity 00:04:59.733 ************************************ 00:04:59.733 09:38:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:59.733 09:38:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.733 09:38:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.733 09:38:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.733 ************************************ 00:04:59.733 START TEST rpc_plugins 00:04:59.733 ************************************ 00:04:59.733 09:38:30 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:59.733 09:38:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:59.733 09:38:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.733 09:38:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.733 09:38:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.733 09:38:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:59.733 09:38:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:59.733 09:38:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.733 09:38:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.733 09:38:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.733 09:38:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:59.733 { 00:04:59.733 "name": "Malloc1", 00:04:59.733 "aliases": [ 00:04:59.733 "e097d566-c05b-4200-9f73-5cff115418cf" 00:04:59.733 ], 00:04:59.733 "product_name": "Malloc disk", 00:04:59.733 "block_size": 4096, 00:04:59.733 "num_blocks": 256, 00:04:59.733 "uuid": "e097d566-c05b-4200-9f73-5cff115418cf", 00:04:59.733 "assigned_rate_limits": { 00:04:59.733 "rw_ios_per_sec": 0, 00:04:59.733 "rw_mbytes_per_sec": 0, 00:04:59.733 "r_mbytes_per_sec": 0, 00:04:59.733 "w_mbytes_per_sec": 0 00:04:59.733 }, 00:04:59.733 "claimed": false, 00:04:59.733 "zoned": false, 00:04:59.733 "supported_io_types": { 00:04:59.733 "read": true, 00:04:59.733 "write": true, 00:04:59.733 "unmap": true, 00:04:59.733 "flush": true, 00:04:59.733 "reset": true, 00:04:59.733 "nvme_admin": false, 00:04:59.733 "nvme_io": false, 00:04:59.733 "nvme_io_md": false, 00:04:59.733 "write_zeroes": true, 00:04:59.733 "zcopy": true, 00:04:59.733 "get_zone_info": false, 00:04:59.733 "zone_management": false, 00:04:59.733 "zone_append": false, 00:04:59.733 "compare": false, 00:04:59.733 "compare_and_write": false, 00:04:59.733 "abort": true, 00:04:59.733 "seek_hole": false, 00:04:59.733 "seek_data": false, 00:04:59.733 "copy": true, 00:04:59.733 "nvme_iov_md": false 
00:04:59.733 }, 00:04:59.733 "memory_domains": [ 00:04:59.733 { 00:04:59.733 "dma_device_id": "system", 00:04:59.733 "dma_device_type": 1 00:04:59.733 }, 00:04:59.733 { 00:04:59.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.733 "dma_device_type": 2 00:04:59.733 } 00:04:59.733 ], 00:04:59.733 "driver_specific": {} 00:04:59.733 } 00:04:59.733 ]' 00:04:59.733 09:38:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:59.733 09:38:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:59.733 09:38:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:59.733 09:38:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.733 09:38:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.733 09:38:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.733 09:38:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:59.733 09:38:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.733 09:38:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.994 09:38:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.994 09:38:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:59.994 09:38:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:59.994 09:38:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:59.994 00:04:59.994 real 0m0.156s 00:04:59.994 user 0m0.099s 00:04:59.994 sys 0m0.021s 00:04:59.994 09:38:30 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.994 09:38:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.994 ************************************ 00:04:59.994 END TEST rpc_plugins 00:04:59.994 ************************************ 00:04:59.994 09:38:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:59.994 09:38:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.994 09:38:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.994 09:38:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.994 ************************************ 00:04:59.994 START TEST rpc_trace_cmd_test 00:04:59.994 ************************************ 00:04:59.994 09:38:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:59.994 09:38:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:59.994 09:38:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:59.994 09:38:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.994 09:38:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:59.994 09:38:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.994 09:38:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:59.994 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1131895", 00:04:59.994 "tpoint_group_mask": "0x8", 00:04:59.994 "iscsi_conn": { 00:04:59.994 "mask": "0x2", 00:04:59.994 "tpoint_mask": "0x0" 00:04:59.994 }, 00:04:59.994 "scsi": { 00:04:59.994 "mask": "0x4", 00:04:59.994 "tpoint_mask": "0x0" 00:04:59.994 }, 00:04:59.994 "bdev": { 00:04:59.994 "mask": "0x8", 00:04:59.994 "tpoint_mask": "0xffffffffffffffff" 00:04:59.994 }, 00:04:59.994 "nvmf_rdma": { 00:04:59.994 "mask": "0x10", 00:04:59.994 "tpoint_mask": "0x0" 00:04:59.994 }, 00:04:59.994 "nvmf_tcp": { 00:04:59.994 "mask": "0x20", 00:04:59.994 
"tpoint_mask": "0x0" 00:04:59.994 }, 00:04:59.994 "ftl": { 00:04:59.994 "mask": "0x40", 00:04:59.994 "tpoint_mask": "0x0" 00:04:59.994 }, 00:04:59.994 "blobfs": { 00:04:59.994 "mask": "0x80", 00:04:59.994 "tpoint_mask": "0x0" 00:04:59.994 }, 00:04:59.994 "dsa": { 00:04:59.994 "mask": "0x200", 00:04:59.994 "tpoint_mask": "0x0" 00:04:59.994 }, 00:04:59.994 "thread": { 00:04:59.994 "mask": "0x400", 00:04:59.994 "tpoint_mask": "0x0" 00:04:59.994 }, 00:04:59.994 "nvme_pcie": { 00:04:59.994 "mask": "0x800", 00:04:59.994 "tpoint_mask": "0x0" 00:04:59.994 }, 00:04:59.994 "iaa": { 00:04:59.994 "mask": "0x1000", 00:04:59.994 "tpoint_mask": "0x0" 00:04:59.994 }, 00:04:59.994 "nvme_tcp": { 00:04:59.994 "mask": "0x2000", 00:04:59.994 "tpoint_mask": "0x0" 00:04:59.994 }, 00:04:59.994 "bdev_nvme": { 00:04:59.994 "mask": "0x4000", 00:04:59.994 "tpoint_mask": "0x0" 00:04:59.994 }, 00:04:59.994 "sock": { 00:04:59.994 "mask": "0x8000", 00:04:59.994 "tpoint_mask": "0x0" 00:04:59.994 }, 00:04:59.994 "blob": { 00:04:59.994 "mask": "0x10000", 00:04:59.994 "tpoint_mask": "0x0" 00:04:59.994 }, 00:04:59.994 "bdev_raid": { 00:04:59.994 "mask": "0x20000", 00:04:59.994 "tpoint_mask": "0x0" 00:04:59.994 }, 00:04:59.994 "scheduler": { 00:04:59.994 "mask": "0x40000", 00:04:59.994 "tpoint_mask": "0x0" 00:04:59.994 } 00:04:59.994 }' 00:04:59.994 09:38:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:59.994 09:38:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:59.994 09:38:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:59.994 09:38:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:59.994 09:38:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:00.256 09:38:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:00.256 09:38:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:00.256 09:38:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:00.256 09:38:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:00.256 09:38:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:00.256 00:05:00.256 real 0m0.254s 00:05:00.256 user 0m0.206s 00:05:00.256 sys 0m0.037s 00:05:00.256 09:38:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.256 09:38:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:00.256 ************************************ 00:05:00.256 END TEST rpc_trace_cmd_test 00:05:00.256 ************************************ 00:05:00.256 09:38:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:00.256 09:38:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:00.256 09:38:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:00.256 09:38:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.256 09:38:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.256 09:38:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.256 ************************************ 00:05:00.256 START TEST rpc_daemon_integrity 00:05:00.256 ************************************ 00:05:00.256 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:00.256 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:00.256 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.256 09:38:31 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.256 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.256 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:00.256 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:00.517 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:00.517 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:00.517 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.517 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.517 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.517 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:00.518 { 00:05:00.518 "name": "Malloc2", 00:05:00.518 "aliases": [ 00:05:00.518 "83fdaecd-7973-4823-8352-a8f108f27cf0" 00:05:00.518 ], 00:05:00.518 "product_name": "Malloc disk", 00:05:00.518 "block_size": 512, 00:05:00.518 "num_blocks": 16384, 00:05:00.518 "uuid": "83fdaecd-7973-4823-8352-a8f108f27cf0", 00:05:00.518 "assigned_rate_limits": { 00:05:00.518 "rw_ios_per_sec": 0, 00:05:00.518 "rw_mbytes_per_sec": 0, 00:05:00.518 "r_mbytes_per_sec": 0, 00:05:00.518 "w_mbytes_per_sec": 0 00:05:00.518 }, 00:05:00.518 "claimed": false, 00:05:00.518 "zoned": false, 00:05:00.518 "supported_io_types": { 00:05:00.518 "read": true, 00:05:00.518 "write": true, 00:05:00.518 "unmap": true, 00:05:00.518 "flush": true, 00:05:00.518 "reset": true, 00:05:00.518 "nvme_admin": false, 00:05:00.518 "nvme_io": false, 00:05:00.518 "nvme_io_md": false, 00:05:00.518 "write_zeroes": true, 00:05:00.518 "zcopy": true, 00:05:00.518 "get_zone_info": false, 00:05:00.518 "zone_management": false, 00:05:00.518 "zone_append": false, 00:05:00.518 "compare": false, 00:05:00.518 "compare_and_write": false, 00:05:00.518 "abort": true, 00:05:00.518 "seek_hole": false, 00:05:00.518 "seek_data": false, 00:05:00.518 "copy": true, 00:05:00.518 "nvme_iov_md": false 00:05:00.518 }, 00:05:00.518 "memory_domains": [ 00:05:00.518 { 00:05:00.518 "dma_device_id": "system", 00:05:00.518 "dma_device_type": 1 00:05:00.518 }, 00:05:00.518 { 00:05:00.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.518 "dma_device_type": 2 00:05:00.518 } 00:05:00.518 ], 00:05:00.518 "driver_specific": {} 00:05:00.518 } 00:05:00.518 ]' 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.518 [2024-11-20 09:38:31.258644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:00.518 
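The vbdev_passthru notices running through this point are rpc_daemon_integrity repeating the earlier rpc_integrity flow: create a malloc bdev, layer a passthru on top (which claims the base with `"claim_type": "exclusive_write"`), and check both appear. Replayed by hand with the arguments the trace shows (rpc.py path assumed; the malloc name is whatever the target hands back, Malloc2 at this point in the run):

```bash
# 8 MiB malloc bdev with 512-byte blocks -> 16384 blocks, as in the JSON dumps.
scripts/rpc.py bdev_malloc_create 8 512
scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
scripts/rpc.py bdev_get_bdevs | jq length    # expect 2 while the pair exists
```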
[2024-11-20 09:38:31.258687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:00.518 [2024-11-20 09:38:31.258704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc228d0 00:05:00.518 [2024-11-20 09:38:31.258711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:00.518 [2024-11-20 09:38:31.260183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:00.518 [2024-11-20 09:38:31.260217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:00.518 Passthru0 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:00.518 { 00:05:00.518 "name": "Malloc2", 00:05:00.518 "aliases": [ 00:05:00.518 "83fdaecd-7973-4823-8352-a8f108f27cf0" 00:05:00.518 ], 00:05:00.518 "product_name": "Malloc disk", 00:05:00.518 "block_size": 512, 00:05:00.518 "num_blocks": 16384, 00:05:00.518 "uuid": "83fdaecd-7973-4823-8352-a8f108f27cf0", 00:05:00.518 "assigned_rate_limits": { 00:05:00.518 "rw_ios_per_sec": 0, 00:05:00.518 "rw_mbytes_per_sec": 0, 00:05:00.518 "r_mbytes_per_sec": 0, 00:05:00.518 "w_mbytes_per_sec": 0 00:05:00.518 }, 00:05:00.518 "claimed": true, 00:05:00.518 "claim_type": "exclusive_write", 00:05:00.518 "zoned": false, 00:05:00.518 "supported_io_types": { 00:05:00.518 "read": true, 00:05:00.518 "write": true, 00:05:00.518 "unmap": true, 00:05:00.518 "flush": true, 00:05:00.518 "reset": true, 00:05:00.518 "nvme_admin": false, 00:05:00.518 "nvme_io": false, 00:05:00.518 "nvme_io_md": false, 00:05:00.518 "write_zeroes": true, 00:05:00.518 "zcopy": true, 00:05:00.518 "get_zone_info": false, 00:05:00.518 "zone_management": false, 00:05:00.518 "zone_append": false, 00:05:00.518 "compare": false, 00:05:00.518 "compare_and_write": false, 00:05:00.518 "abort": true, 00:05:00.518 "seek_hole": false, 00:05:00.518 "seek_data": false, 00:05:00.518 "copy": true, 00:05:00.518 "nvme_iov_md": false 00:05:00.518 }, 00:05:00.518 "memory_domains": [ 00:05:00.518 { 00:05:00.518 "dma_device_id": "system", 00:05:00.518 "dma_device_type": 1 00:05:00.518 }, 00:05:00.518 { 00:05:00.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.518 "dma_device_type": 2 00:05:00.518 } 00:05:00.518 ], 00:05:00.518 "driver_specific": {} 00:05:00.518 }, 00:05:00.518 { 00:05:00.518 "name": "Passthru0", 00:05:00.518 "aliases": [ 00:05:00.518 "07d951be-0eed-539d-b978-567dd3c4cf78" 00:05:00.518 ], 00:05:00.518 "product_name": "passthru", 00:05:00.518 "block_size": 512, 00:05:00.518 "num_blocks": 16384, 00:05:00.518 "uuid": "07d951be-0eed-539d-b978-567dd3c4cf78", 00:05:00.518 "assigned_rate_limits": { 00:05:00.518 "rw_ios_per_sec": 0, 00:05:00.518 "rw_mbytes_per_sec": 0, 00:05:00.518 "r_mbytes_per_sec": 0, 00:05:00.518 "w_mbytes_per_sec": 0 00:05:00.518 }, 00:05:00.518 "claimed": false, 00:05:00.518 "zoned": false, 00:05:00.518 "supported_io_types": { 00:05:00.518 "read": true, 00:05:00.518 "write": true, 00:05:00.518 "unmap": true, 00:05:00.518 "flush": true, 00:05:00.518 "reset": true, 
00:05:00.518 "nvme_admin": false, 00:05:00.518 "nvme_io": false, 00:05:00.518 "nvme_io_md": false, 00:05:00.518 "write_zeroes": true, 00:05:00.518 "zcopy": true, 00:05:00.518 "get_zone_info": false, 00:05:00.518 "zone_management": false, 00:05:00.518 "zone_append": false, 00:05:00.518 "compare": false, 00:05:00.518 "compare_and_write": false, 00:05:00.518 "abort": true, 00:05:00.518 "seek_hole": false, 00:05:00.518 "seek_data": false, 00:05:00.518 "copy": true, 00:05:00.518 "nvme_iov_md": false 00:05:00.518 }, 00:05:00.518 "memory_domains": [ 00:05:00.518 { 00:05:00.518 "dma_device_id": "system", 00:05:00.518 "dma_device_type": 1 00:05:00.518 }, 00:05:00.518 { 00:05:00.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.518 "dma_device_type": 2 00:05:00.518 } 00:05:00.518 ], 00:05:00.518 "driver_specific": { 00:05:00.518 "passthru": { 00:05:00.518 "name": "Passthru0", 00:05:00.518 "base_bdev_name": "Malloc2" 00:05:00.518 } 00:05:00.518 } 00:05:00.518 } 00:05:00.518 ]' 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:00.518 00:05:00.518 real 0m0.302s 00:05:00.518 user 0m0.184s 00:05:00.518 sys 0m0.051s 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.518 09:38:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.518 ************************************ 00:05:00.518 END TEST rpc_daemon_integrity 00:05:00.518 ************************************ 00:05:00.779 09:38:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:00.779 09:38:31 rpc -- rpc/rpc.sh@84 -- # killprocess 1131895 00:05:00.779 09:38:31 rpc -- common/autotest_common.sh@954 -- # '[' -z 1131895 ']' 00:05:00.779 09:38:31 rpc -- common/autotest_common.sh@958 -- # kill -0 1131895 00:05:00.779 09:38:31 rpc -- common/autotest_common.sh@959 -- # uname 00:05:00.779 09:38:31 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.779 09:38:31 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1131895 
00:05:00.779 09:38:31 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.779 09:38:31 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.779 09:38:31 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1131895' 00:05:00.779 killing process with pid 1131895 00:05:00.779 09:38:31 rpc -- common/autotest_common.sh@973 -- # kill 1131895 00:05:00.779 09:38:31 rpc -- common/autotest_common.sh@978 -- # wait 1131895 00:05:01.040 00:05:01.040 real 0m2.737s 00:05:01.040 user 0m3.472s 00:05:01.040 sys 0m0.868s 00:05:01.040 09:38:31 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.040 09:38:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.040 ************************************ 00:05:01.040 END TEST rpc 00:05:01.040 ************************************ 00:05:01.040 09:38:31 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:01.040 09:38:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.040 09:38:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.040 09:38:31 -- common/autotest_common.sh@10 -- # set +x 00:05:01.040 ************************************ 00:05:01.040 START TEST skip_rpc 00:05:01.040 ************************************ 00:05:01.040 09:38:31 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:01.040 * Looking for test storage... 00:05:01.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:01.040 09:38:31 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.300 09:38:31 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.300 09:38:31 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.300 09:38:32 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.300 09:38:32 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:01.301 09:38:32 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.301 09:38:32 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.301 09:38:32 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.301 09:38:32 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:01.301 09:38:32 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.301 09:38:32 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.301 --rc genhtml_branch_coverage=1 00:05:01.301 --rc genhtml_function_coverage=1 00:05:01.301 --rc genhtml_legend=1 00:05:01.301 --rc geninfo_all_blocks=1 00:05:01.301 --rc geninfo_unexecuted_blocks=1 00:05:01.301 00:05:01.301 ' 00:05:01.301 09:38:32 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.301 --rc genhtml_branch_coverage=1 00:05:01.301 --rc genhtml_function_coverage=1 00:05:01.301 --rc genhtml_legend=1 00:05:01.301 --rc geninfo_all_blocks=1 00:05:01.301 --rc geninfo_unexecuted_blocks=1 00:05:01.301 00:05:01.301 ' 00:05:01.301 09:38:32 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.301 --rc genhtml_branch_coverage=1 00:05:01.301 --rc genhtml_function_coverage=1 00:05:01.301 --rc genhtml_legend=1 00:05:01.301 --rc geninfo_all_blocks=1 00:05:01.301 --rc geninfo_unexecuted_blocks=1 00:05:01.301 00:05:01.301 ' 00:05:01.301 09:38:32 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.301 --rc genhtml_branch_coverage=1 00:05:01.301 --rc genhtml_function_coverage=1 00:05:01.301 --rc genhtml_legend=1 00:05:01.301 --rc geninfo_all_blocks=1 00:05:01.301 --rc geninfo_unexecuted_blocks=1 00:05:01.301 00:05:01.301 ' 00:05:01.301 09:38:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:01.301 09:38:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:01.301 09:38:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:01.301 09:38:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.301 09:38:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.301 09:38:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.301 ************************************ 00:05:01.301 START TEST skip_rpc 00:05:01.301 ************************************ 00:05:01.301 09:38:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:01.301 
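Both this suite and the rpc suite above run the same guard first: scripts/common.sh walks the installed lcov version field by field against 2, and only a pre-2.0 lcov gets the legacy --rc branch/function coverage flags. A compact equivalent using sort -V in place of the script's per-field loop (a swapped-in technique, not the script's own code):

```bash
# 'version_lt A B' succeeds when A sorts strictly before B.
version_lt() {
    [ "$1" != "$2" ] &&
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
version_lt 1.15 2 && echo "lcov < 2: enable legacy --rc coverage flags"
```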
09:38:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1132745 00:05:01.301 09:38:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.301 09:38:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:01.301 09:38:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:01.301 [2024-11-20 09:38:32.147285] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:05:01.301 [2024-11-20 09:38:32.147345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132745 ] 00:05:01.561 [2024-11-20 09:38:32.239850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.561 [2024-11-20 09:38:32.293653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1132745 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1132745 ']' 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1132745 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1132745 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1132745' 00:05:06.841 killing process with pid 1132745 00:05:06.841 09:38:37 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1132745 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1132745 00:05:06.841 00:05:06.841 real 0m5.263s 00:05:06.841 user 0m5.023s 00:05:06.841 sys 0m0.289s 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.841 09:38:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.841 ************************************ 00:05:06.841 END TEST skip_rpc 00:05:06.841 ************************************ 00:05:06.841 09:38:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:06.841 09:38:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.841 09:38:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.841 09:38:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.841 ************************************ 00:05:06.841 START TEST skip_rpc_with_json 00:05:06.841 ************************************ 00:05:06.841 09:38:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:06.841 09:38:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:06.841 09:38:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1133787 00:05:06.841 09:38:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.841 09:38:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1133787 00:05:06.841 09:38:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.841 09:38:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1133787 ']' 00:05:06.841 09:38:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.841 09:38:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.841 09:38:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.841 09:38:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.841 09:38:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.841 [2024-11-20 09:38:37.480913] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
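The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from waitforlisten, which blocks until the freshly launched spdk_tgt answers RPCs. A minimal sketch of that kind of poll, assuming SPDK's scripts/rpc.py with its -s (socket) and -t (timeout) options; the real helper in autotest_common.sh is more thorough:

    #!/usr/bin/env bash
    # Minimal waitforlisten-style poll (sketch, not the real helper).
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            # Give up immediately if the target died while we were waiting.
            kill -0 "$pid" 2>/dev/null || return 1
            # Any successful RPC proves the server is up and listening.
            if "$rootdir/scripts/rpc.py" -s "$sock" -t 1 rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }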
00:05:06.841 [2024-11-20 09:38:37.480963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133787 ] 00:05:06.841 [2024-11-20 09:38:37.562987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.841 [2024-11-20 09:38:37.594186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.412 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.412 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:07.412 09:38:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:07.412 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.412 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.412 [2024-11-20 09:38:38.268098] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:07.412 request: 00:05:07.412 { 00:05:07.412 "trtype": "tcp", 00:05:07.412 "method": "nvmf_get_transports", 00:05:07.412 "req_id": 1 00:05:07.412 } 00:05:07.412 Got JSON-RPC error response 00:05:07.412 response: 00:05:07.412 { 00:05:07.412 "code": -19, 00:05:07.412 "message": "No such device" 00:05:07.412 } 00:05:07.412 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:07.412 09:38:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:07.412 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.412 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.412 [2024-11-20 09:38:38.280197] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:07.412 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.412 09:38:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:07.412 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.412 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.673 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.673 09:38:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:07.673 { 00:05:07.673 "subsystems": [ 00:05:07.673 { 00:05:07.673 "subsystem": "fsdev", 00:05:07.673 "config": [ 00:05:07.673 { 00:05:07.673 "method": "fsdev_set_opts", 00:05:07.673 "params": { 00:05:07.673 "fsdev_io_pool_size": 65535, 00:05:07.673 "fsdev_io_cache_size": 256 00:05:07.673 } 00:05:07.673 } 00:05:07.673 ] 00:05:07.673 }, 00:05:07.673 { 00:05:07.673 "subsystem": "vfio_user_target", 00:05:07.673 "config": null 00:05:07.673 }, 00:05:07.673 { 00:05:07.673 "subsystem": "keyring", 00:05:07.673 "config": [] 00:05:07.673 }, 00:05:07.673 { 00:05:07.673 "subsystem": "iobuf", 00:05:07.673 "config": [ 00:05:07.673 { 00:05:07.673 "method": "iobuf_set_options", 00:05:07.673 "params": { 00:05:07.673 "small_pool_count": 8192, 00:05:07.673 "large_pool_count": 1024, 00:05:07.673 "small_bufsize": 8192, 00:05:07.673 "large_bufsize": 135168, 00:05:07.673 "enable_numa": false 00:05:07.673 } 00:05:07.673 } 
00:05:07.673 ] 00:05:07.673 }, 00:05:07.673 { 00:05:07.673 "subsystem": "sock", 00:05:07.673 "config": [ 00:05:07.673 { 00:05:07.673 "method": "sock_set_default_impl", 00:05:07.673 "params": { 00:05:07.673 "impl_name": "posix" 00:05:07.673 } 00:05:07.673 }, 00:05:07.673 { 00:05:07.673 "method": "sock_impl_set_options", 00:05:07.673 "params": { 00:05:07.673 "impl_name": "ssl", 00:05:07.673 "recv_buf_size": 4096, 00:05:07.673 "send_buf_size": 4096, 00:05:07.673 "enable_recv_pipe": true, 00:05:07.673 "enable_quickack": false, 00:05:07.673 "enable_placement_id": 0, 00:05:07.673 "enable_zerocopy_send_server": true, 00:05:07.673 "enable_zerocopy_send_client": false, 00:05:07.673 "zerocopy_threshold": 0, 00:05:07.673 "tls_version": 0, 00:05:07.673 "enable_ktls": false 00:05:07.673 } 00:05:07.673 }, 00:05:07.673 { 00:05:07.673 "method": "sock_impl_set_options", 00:05:07.673 "params": { 00:05:07.673 "impl_name": "posix", 00:05:07.673 "recv_buf_size": 2097152, 00:05:07.673 "send_buf_size": 2097152, 00:05:07.673 "enable_recv_pipe": true, 00:05:07.673 "enable_quickack": false, 00:05:07.673 "enable_placement_id": 0, 00:05:07.673 "enable_zerocopy_send_server": true, 00:05:07.673 "enable_zerocopy_send_client": false, 00:05:07.673 "zerocopy_threshold": 0, 00:05:07.673 "tls_version": 0, 00:05:07.673 "enable_ktls": false 00:05:07.673 } 00:05:07.673 } 00:05:07.673 ] 00:05:07.673 }, 00:05:07.673 { 00:05:07.673 "subsystem": "vmd", 00:05:07.673 "config": [] 00:05:07.673 }, 00:05:07.673 { 00:05:07.673 "subsystem": "accel", 00:05:07.673 "config": [ 00:05:07.673 { 00:05:07.673 "method": "accel_set_options", 00:05:07.673 "params": { 00:05:07.673 "small_cache_size": 128, 00:05:07.673 "large_cache_size": 16, 00:05:07.673 "task_count": 2048, 00:05:07.673 "sequence_count": 2048, 00:05:07.673 "buf_count": 2048 00:05:07.673 } 00:05:07.673 } 00:05:07.673 ] 00:05:07.673 }, 00:05:07.673 { 00:05:07.673 "subsystem": "bdev", 00:05:07.673 "config": [ 00:05:07.673 { 00:05:07.673 "method": "bdev_set_options", 00:05:07.673 "params": { 00:05:07.673 "bdev_io_pool_size": 65535, 00:05:07.673 "bdev_io_cache_size": 256, 00:05:07.673 "bdev_auto_examine": true, 00:05:07.673 "iobuf_small_cache_size": 128, 00:05:07.673 "iobuf_large_cache_size": 16 00:05:07.673 } 00:05:07.673 }, 00:05:07.673 { 00:05:07.673 "method": "bdev_raid_set_options", 00:05:07.673 "params": { 00:05:07.673 "process_window_size_kb": 1024, 00:05:07.673 "process_max_bandwidth_mb_sec": 0 00:05:07.673 } 00:05:07.673 }, 00:05:07.673 { 00:05:07.673 "method": "bdev_iscsi_set_options", 00:05:07.673 "params": { 00:05:07.673 "timeout_sec": 30 00:05:07.673 } 00:05:07.673 }, 00:05:07.673 { 00:05:07.673 "method": "bdev_nvme_set_options", 00:05:07.673 "params": { 00:05:07.673 "action_on_timeout": "none", 00:05:07.673 "timeout_us": 0, 00:05:07.673 "timeout_admin_us": 0, 00:05:07.673 "keep_alive_timeout_ms": 10000, 00:05:07.673 "arbitration_burst": 0, 00:05:07.673 "low_priority_weight": 0, 00:05:07.673 "medium_priority_weight": 0, 00:05:07.673 "high_priority_weight": 0, 00:05:07.673 "nvme_adminq_poll_period_us": 10000, 00:05:07.673 "nvme_ioq_poll_period_us": 0, 00:05:07.673 "io_queue_requests": 0, 00:05:07.673 "delay_cmd_submit": true, 00:05:07.673 "transport_retry_count": 4, 00:05:07.673 "bdev_retry_count": 3, 00:05:07.673 "transport_ack_timeout": 0, 00:05:07.673 "ctrlr_loss_timeout_sec": 0, 00:05:07.673 "reconnect_delay_sec": 0, 00:05:07.673 "fast_io_fail_timeout_sec": 0, 00:05:07.673 "disable_auto_failback": false, 00:05:07.673 "generate_uuids": false, 00:05:07.673 "transport_tos": 
0, 00:05:07.673 "nvme_error_stat": false, 00:05:07.673 "rdma_srq_size": 0, 00:05:07.673 "io_path_stat": false, 00:05:07.673 "allow_accel_sequence": false, 00:05:07.673 "rdma_max_cq_size": 0, 00:05:07.673 "rdma_cm_event_timeout_ms": 0, 00:05:07.673 "dhchap_digests": [ 00:05:07.673 "sha256", 00:05:07.673 "sha384", 00:05:07.673 "sha512" 00:05:07.673 ], 00:05:07.673 "dhchap_dhgroups": [ 00:05:07.673 "null", 00:05:07.673 "ffdhe2048", 00:05:07.673 "ffdhe3072", 00:05:07.673 "ffdhe4096", 00:05:07.673 "ffdhe6144", 00:05:07.673 "ffdhe8192" 00:05:07.673 ] 00:05:07.673 } 00:05:07.673 }, 00:05:07.673 { 00:05:07.673 "method": "bdev_nvme_set_hotplug", 00:05:07.673 "params": { 00:05:07.673 "period_us": 100000, 00:05:07.673 "enable": false 00:05:07.673 } 00:05:07.673 }, 00:05:07.673 { 00:05:07.673 "method": "bdev_wait_for_examine" 00:05:07.673 } 00:05:07.673 ] 00:05:07.673 }, 00:05:07.673 { 00:05:07.673 "subsystem": "scsi", 00:05:07.673 "config": null 00:05:07.673 }, 00:05:07.673 { 00:05:07.673 "subsystem": "scheduler", 00:05:07.673 "config": [ 00:05:07.673 { 00:05:07.673 "method": "framework_set_scheduler", 00:05:07.673 "params": { 00:05:07.674 "name": "static" 00:05:07.674 } 00:05:07.674 } 00:05:07.674 ] 00:05:07.674 }, 00:05:07.674 { 00:05:07.674 "subsystem": "vhost_scsi", 00:05:07.674 "config": [] 00:05:07.674 }, 00:05:07.674 { 00:05:07.674 "subsystem": "vhost_blk", 00:05:07.674 "config": [] 00:05:07.674 }, 00:05:07.674 { 00:05:07.674 "subsystem": "ublk", 00:05:07.674 "config": [] 00:05:07.674 }, 00:05:07.674 { 00:05:07.674 "subsystem": "nbd", 00:05:07.674 "config": [] 00:05:07.674 }, 00:05:07.674 { 00:05:07.674 "subsystem": "nvmf", 00:05:07.674 "config": [ 00:05:07.674 { 00:05:07.674 "method": "nvmf_set_config", 00:05:07.674 "params": { 00:05:07.674 "discovery_filter": "match_any", 00:05:07.674 "admin_cmd_passthru": { 00:05:07.674 "identify_ctrlr": false 00:05:07.674 }, 00:05:07.674 "dhchap_digests": [ 00:05:07.674 "sha256", 00:05:07.674 "sha384", 00:05:07.674 "sha512" 00:05:07.674 ], 00:05:07.674 "dhchap_dhgroups": [ 00:05:07.674 "null", 00:05:07.674 "ffdhe2048", 00:05:07.674 "ffdhe3072", 00:05:07.674 "ffdhe4096", 00:05:07.674 "ffdhe6144", 00:05:07.674 "ffdhe8192" 00:05:07.674 ] 00:05:07.674 } 00:05:07.674 }, 00:05:07.674 { 00:05:07.674 "method": "nvmf_set_max_subsystems", 00:05:07.674 "params": { 00:05:07.674 "max_subsystems": 1024 00:05:07.674 } 00:05:07.674 }, 00:05:07.674 { 00:05:07.674 "method": "nvmf_set_crdt", 00:05:07.674 "params": { 00:05:07.674 "crdt1": 0, 00:05:07.674 "crdt2": 0, 00:05:07.674 "crdt3": 0 00:05:07.674 } 00:05:07.674 }, 00:05:07.674 { 00:05:07.674 "method": "nvmf_create_transport", 00:05:07.674 "params": { 00:05:07.674 "trtype": "TCP", 00:05:07.674 "max_queue_depth": 128, 00:05:07.674 "max_io_qpairs_per_ctrlr": 127, 00:05:07.674 "in_capsule_data_size": 4096, 00:05:07.674 "max_io_size": 131072, 00:05:07.674 "io_unit_size": 131072, 00:05:07.674 "max_aq_depth": 128, 00:05:07.674 "num_shared_buffers": 511, 00:05:07.674 "buf_cache_size": 4294967295, 00:05:07.674 "dif_insert_or_strip": false, 00:05:07.674 "zcopy": false, 00:05:07.674 "c2h_success": true, 00:05:07.674 "sock_priority": 0, 00:05:07.674 "abort_timeout_sec": 1, 00:05:07.674 "ack_timeout": 0, 00:05:07.674 "data_wr_pool_size": 0 00:05:07.674 } 00:05:07.674 } 00:05:07.674 ] 00:05:07.674 }, 00:05:07.674 { 00:05:07.674 "subsystem": "iscsi", 00:05:07.674 "config": [ 00:05:07.674 { 00:05:07.674 "method": "iscsi_set_options", 00:05:07.674 "params": { 00:05:07.674 "node_base": "iqn.2016-06.io.spdk", 00:05:07.674 "max_sessions": 
128, 00:05:07.674 "max_connections_per_session": 2, 00:05:07.674 "max_queue_depth": 64, 00:05:07.674 "default_time2wait": 2, 00:05:07.674 "default_time2retain": 20, 00:05:07.674 "first_burst_length": 8192, 00:05:07.674 "immediate_data": true, 00:05:07.674 "allow_duplicated_isid": false, 00:05:07.674 "error_recovery_level": 0, 00:05:07.674 "nop_timeout": 60, 00:05:07.674 "nop_in_interval": 30, 00:05:07.674 "disable_chap": false, 00:05:07.674 "require_chap": false, 00:05:07.674 "mutual_chap": false, 00:05:07.674 "chap_group": 0, 00:05:07.674 "max_large_datain_per_connection": 64, 00:05:07.674 "max_r2t_per_connection": 4, 00:05:07.674 "pdu_pool_size": 36864, 00:05:07.674 "immediate_data_pool_size": 16384, 00:05:07.674 "data_out_pool_size": 2048 00:05:07.674 } 00:05:07.674 } 00:05:07.674 ] 00:05:07.674 } 00:05:07.674 ] 00:05:07.674 } 00:05:07.674 09:38:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:07.674 09:38:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1133787 00:05:07.674 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1133787 ']' 00:05:07.674 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1133787 00:05:07.674 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:07.674 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.674 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1133787 00:05:07.674 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.674 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.674 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1133787' 00:05:07.674 killing process with pid 1133787 00:05:07.674 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1133787 00:05:07.674 09:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1133787 00:05:07.934 09:38:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1134128 00:05:07.934 09:38:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:07.934 09:38:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:13.217 09:38:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1134128 00:05:13.217 09:38:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1134128 ']' 00:05:13.217 09:38:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1134128 00:05:13.217 09:38:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:13.217 09:38:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.217 09:38:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1134128 00:05:13.217 09:38:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.217 09:38:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.217 09:38:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1134128' 00:05:13.217 killing process with pid 1134128 00:05:13.217 09:38:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1134128 00:05:13.217 09:38:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1134128 00:05:13.217 09:38:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:13.217 09:38:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:13.217 00:05:13.217 real 0m6.549s 00:05:13.217 user 0m6.455s 00:05:13.217 sys 0m0.557s 00:05:13.217 09:38:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.217 09:38:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:13.217 ************************************ 00:05:13.217 END TEST skip_rpc_with_json 00:05:13.217 ************************************ 00:05:13.217 09:38:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:13.217 09:38:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.217 09:38:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.217 09:38:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.217 ************************************ 00:05:13.217 START TEST skip_rpc_with_delay 00:05:13.217 ************************************ 00:05:13.217 09:38:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:13.217 09:38:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:13.217 09:38:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:13.217 09:38:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:13.217 09:38:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.217 09:38:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.217 09:38:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.217 09:38:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.217 09:38:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.217 09:38:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.217 09:38:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.217 09:38:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:13.217 09:38:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:13.217 
[2024-11-20 09:38:44.109641] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:13.217 09:38:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:13.217 09:38:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:13.217 09:38:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:13.217 09:38:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:13.217 00:05:13.217 real 0m0.076s 00:05:13.217 user 0m0.049s 00:05:13.217 sys 0m0.026s 00:05:13.217 09:38:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.217 09:38:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:13.217 ************************************ 00:05:13.217 END TEST skip_rpc_with_delay 00:05:13.217 ************************************ 00:05:13.477 09:38:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:13.477 09:38:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:13.477 09:38:44 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:13.477 09:38:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.477 09:38:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.477 09:38:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.477 ************************************ 00:05:13.477 START TEST exit_on_failed_rpc_init 00:05:13.477 ************************************ 00:05:13.477 09:38:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:13.477 09:38:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1135191 00:05:13.477 09:38:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1135191 00:05:13.477 09:38:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.477 09:38:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1135191 ']' 00:05:13.477 09:38:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.477 09:38:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.477 09:38:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.477 09:38:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.477 09:38:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:13.477 [2024-11-20 09:38:44.264250] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
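skip_rpc_with_delay above is a negative test: spdk_tgt must refuse --wait-for-rpc when --no-rpc-server disables the RPC server, and the NOT wrapper turns that refusal into a pass. A reduced sketch of the pattern (the real NOT in autotest_common.sh handles more exit-code cases than shown here):

    # Succeed only when the wrapped command fails cleanly (non-zero exit
    # that is not a signal death).
    NOT() {
        local es=0
        "$@" || es=$?
        ((es > 128)) && return 1   # >128 means killed by a signal, i.e. a crash
        ((es != 0))
    }

    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    NOT "$spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc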
00:05:13.477 [2024-11-20 09:38:44.264309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1135191 ] 00:05:13.477 [2024-11-20 09:38:44.352482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.477 [2024-11-20 09:38:44.387336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:14.420 [2024-11-20 09:38:45.123597] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:05:14.420 [2024-11-20 09:38:45.123650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1135355 ] 00:05:14.420 [2024-11-20 09:38:45.210025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.420 [2024-11-20 09:38:45.246021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.420 [2024-11-20 09:38:45.246070] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
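That error is exactly what exit_on_failed_rpc_init wants to see: the second spdk_tgt (-m 0x2) cannot listen on /var/tmp/spdk.sock while the first instance owns it, rpc.c refuses the socket, and the app exits non-zero. Outside this test, two targets coexist by giving each its own RPC socket with -r; a sketch, with illustrative socket names:

    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

    # Each instance gets a distinct core mask and a distinct RPC socket,
    # so neither collides on the default /var/tmp/spdk.sock.
    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk_a.sock &
    "$spdk_tgt" -m 0x2 -r /var/tmp/spdk_b.sock &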
00:05:14.420 [2024-11-20 09:38:45.246080] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:14.420 [2024-11-20 09:38:45.246086] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1135191 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1135191 ']' 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1135191 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.420 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1135191 00:05:14.681 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.681 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.681 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1135191' 00:05:14.681 killing process with pid 1135191 00:05:14.681 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1135191 00:05:14.681 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1135191 00:05:14.681 00:05:14.681 real 0m1.330s 00:05:14.681 user 0m1.566s 00:05:14.681 sys 0m0.383s 00:05:14.681 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.681 09:38:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.681 ************************************ 00:05:14.681 END TEST exit_on_failed_rpc_init 00:05:14.681 ************************************ 00:05:14.681 09:38:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:14.681 00:05:14.681 real 0m13.730s 00:05:14.681 user 0m13.331s 00:05:14.681 sys 0m1.561s 00:05:14.681 09:38:45 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.681 09:38:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.681 ************************************ 00:05:14.681 END TEST skip_rpc 00:05:14.681 ************************************ 00:05:14.941 09:38:45 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:14.942 09:38:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.942 09:38:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.942 09:38:45 -- 
common/autotest_common.sh@10 -- # set +x 00:05:14.942 ************************************ 00:05:14.942 START TEST rpc_client 00:05:14.942 ************************************ 00:05:14.942 09:38:45 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:14.942 * Looking for test storage... 00:05:14.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:14.942 09:38:45 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.942 09:38:45 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.942 09:38:45 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:14.942 09:38:45 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.942 09:38:45 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:14.942 09:38:45 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.942 09:38:45 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:14.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.942 --rc genhtml_branch_coverage=1 00:05:14.942 --rc genhtml_function_coverage=1 00:05:14.942 --rc genhtml_legend=1 00:05:14.942 --rc geninfo_all_blocks=1 00:05:14.942 --rc geninfo_unexecuted_blocks=1 00:05:14.942 00:05:14.942 ' 00:05:14.942 09:38:45 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:14.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.942 --rc genhtml_branch_coverage=1 00:05:14.942 --rc genhtml_function_coverage=1 00:05:14.942 --rc genhtml_legend=1 00:05:14.942 --rc geninfo_all_blocks=1 00:05:14.942 --rc geninfo_unexecuted_blocks=1 00:05:14.942 00:05:14.942 ' 00:05:14.942 09:38:45 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:14.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.942 --rc genhtml_branch_coverage=1 00:05:14.942 --rc genhtml_function_coverage=1 00:05:14.942 --rc genhtml_legend=1 00:05:14.942 --rc geninfo_all_blocks=1 00:05:14.942 --rc geninfo_unexecuted_blocks=1 00:05:14.942 00:05:14.942 ' 00:05:14.942 09:38:45 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:14.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.942 --rc genhtml_branch_coverage=1 00:05:14.942 --rc genhtml_function_coverage=1 00:05:14.942 --rc genhtml_legend=1 00:05:14.942 --rc geninfo_all_blocks=1 00:05:14.942 --rc geninfo_unexecuted_blocks=1 00:05:14.942 00:05:14.942 ' 00:05:14.942 09:38:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:15.203 OK 00:05:15.203 09:38:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:15.203 00:05:15.203 real 0m0.219s 00:05:15.203 user 0m0.125s 00:05:15.203 sys 0m0.108s 00:05:15.203 09:38:45 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.203 09:38:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:15.203 ************************************ 00:05:15.203 END TEST rpc_client 00:05:15.203 ************************************ 00:05:15.203 09:38:45 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
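The scripts/common.sh trace above is the lcov version gate: lt 1.15 2 tokenizes both versions on '.', '-' and ':' and compares field by field to decide which LCOV_OPTS to export. The comparison reduces to something like this (a sketch assuming purely numeric fields; the real cmp_versions also validates each field and supports the other operators):

    # Return 0 when version $1 is strictly lower than version $2 (sketch).
    version_lt() {
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # equal is not lower
    }

    version_lt 1.15 2 && echo "lcov is older than 2.x"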
00:05:15.203 09:38:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.203 09:38:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.203 09:38:45 -- common/autotest_common.sh@10 -- # set +x 00:05:15.203 ************************************ 00:05:15.203 START TEST json_config 00:05:15.203 ************************************ 00:05:15.203 09:38:45 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:15.203 09:38:46 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:15.203 09:38:46 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:15.203 09:38:46 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.203 09:38:46 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.203 09:38:46 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.203 09:38:46 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.203 09:38:46 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.203 09:38:46 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.203 09:38:46 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.203 09:38:46 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.203 09:38:46 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.203 09:38:46 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.203 09:38:46 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.203 09:38:46 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.203 09:38:46 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.203 09:38:46 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:15.203 09:38:46 json_config -- scripts/common.sh@345 -- # : 1 00:05:15.203 09:38:46 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.203 09:38:46 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.204 09:38:46 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:15.204 09:38:46 json_config -- scripts/common.sh@353 -- # local d=1 00:05:15.204 09:38:46 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.204 09:38:46 json_config -- scripts/common.sh@355 -- # echo 1 00:05:15.204 09:38:46 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.204 09:38:46 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:15.204 09:38:46 json_config -- scripts/common.sh@353 -- # local d=2 00:05:15.204 09:38:46 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.204 09:38:46 json_config -- scripts/common.sh@355 -- # echo 2 00:05:15.204 09:38:46 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.204 09:38:46 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.204 09:38:46 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.204 09:38:46 json_config -- scripts/common.sh@368 -- # return 0 00:05:15.204 09:38:46 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.204 09:38:46 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.204 --rc genhtml_branch_coverage=1 00:05:15.204 --rc genhtml_function_coverage=1 00:05:15.204 --rc genhtml_legend=1 00:05:15.204 --rc geninfo_all_blocks=1 00:05:15.204 --rc geninfo_unexecuted_blocks=1 00:05:15.204 00:05:15.204 ' 00:05:15.204 09:38:46 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.204 --rc genhtml_branch_coverage=1 00:05:15.204 --rc genhtml_function_coverage=1 00:05:15.204 --rc genhtml_legend=1 00:05:15.204 --rc geninfo_all_blocks=1 00:05:15.204 --rc geninfo_unexecuted_blocks=1 00:05:15.204 00:05:15.204 ' 00:05:15.204 09:38:46 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.204 --rc genhtml_branch_coverage=1 00:05:15.204 --rc genhtml_function_coverage=1 00:05:15.204 --rc genhtml_legend=1 00:05:15.204 --rc geninfo_all_blocks=1 00:05:15.204 --rc geninfo_unexecuted_blocks=1 00:05:15.204 00:05:15.204 ' 00:05:15.204 09:38:46 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.204 --rc genhtml_branch_coverage=1 00:05:15.204 --rc genhtml_function_coverage=1 00:05:15.204 --rc genhtml_legend=1 00:05:15.204 --rc geninfo_all_blocks=1 00:05:15.204 --rc geninfo_unexecuted_blocks=1 00:05:15.204 00:05:15.204 ' 00:05:15.464 09:38:46 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:15.464 09:38:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:15.464 09:38:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.464 09:38:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.464 09:38:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.464 09:38:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.464 09:38:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.464 09:38:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.464 09:38:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
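nvmf/common.sh has now established the test network defaults: TCP address 127.0.0.1, first port 4420, serial SPDKISFASTANDAWESOME, and so on. Once a target is actually listening (as json_config arranges further down), those defaults plug straight into nvme-cli; a hypothetical initiator-side connect built from them, not something this test itself runs:

    # Hypothetical host-side connect using the common.sh defaults.
    NVMF_TCP_IP_ADDRESS=127.0.0.1
    NVMF_PORT=4420
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn

    nvme connect -t tcp -a "$NVMF_TCP_IP_ADDRESS" -s "$NVMF_PORT" -n "$NVME_SUBNQN"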
00:05:15.464 09:38:46 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.464 09:38:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.464 09:38:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.465 09:38:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:15.465 09:38:46 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:15.465 09:38:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.465 09:38:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.465 09:38:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:15.465 09:38:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:15.465 09:38:46 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:15.465 09:38:46 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:15.465 09:38:46 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.465 09:38:46 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.465 09:38:46 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.465 09:38:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.465 09:38:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.465 09:38:46 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.465 09:38:46 json_config -- paths/export.sh@5 -- # export PATH 00:05:15.465 09:38:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.465 09:38:46 json_config -- nvmf/common.sh@51 -- # : 0 00:05:15.465 09:38:46 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:15.465 09:38:46 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
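Above, the host identity comes from nvme gen-hostnqn, which prints nqn.2014-08.org.nvmexpress:uuid:<uuid>; the UUID tail is reused as NVME_HOSTID and both are packed into the NVME_HOST argument array. The derivation amounts to the following (the parameter expansion is a reconstruction, not a quote of common.sh):

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # keep only the UUID after "uuid:"
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")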
00:05:15.465 09:38:46 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:15.465 09:38:46 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.465 09:38:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.465 09:38:46 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:15.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:15.465 09:38:46 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:15.465 09:38:46 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:15.465 09:38:46 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:15.465 INFO: JSON configuration test init 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:15.465 09:38:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.465 09:38:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:15.465 09:38:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.465 09:38:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.465 09:38:46 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:15.465 09:38:46 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:15.465 09:38:46 json_config -- json_config/common.sh@10 -- # shift 00:05:15.465 09:38:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:15.465 09:38:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:15.465 09:38:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:15.465 09:38:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.465 09:38:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.465 09:38:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1135664 00:05:15.465 09:38:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:15.465 Waiting for target to run... 00:05:15.465 09:38:46 json_config -- json_config/common.sh@25 -- # waitforlisten 1135664 /var/tmp/spdk_tgt.sock 00:05:15.465 09:38:46 json_config -- common/autotest_common.sh@835 -- # '[' -z 1135664 ']' 00:05:15.465 09:38:46 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.465 09:38:46 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.465 09:38:46 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:15.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:15.465 09:38:46 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.465 09:38:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:15.465 09:38:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.465 [2024-11-20 09:38:46.231272] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
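One genuine wart surfaces in the nvmf/common.sh trace above: line 33 runs '[' '' -eq 1 ']' because the flag it tests is unset, and test prints "[: : integer expression expected". The script shrugs it off, since the comparison simply fails, but the noise is avoidable by defaulting the expansion; a sketch, with a stand-in flag name (the actual variable at line 33 is not visible in the trace):

    # Brittle: an unset flag expands to '' and test cannot parse it as an integer.
    #   [ "$SOME_SPDK_TEST_FLAG" -eq 1 ] && setup_extra_interfaces
    # Robust: default to 0 so the comparison always receives a number.
    if [[ ${SOME_SPDK_TEST_FLAG:-0} -eq 1 ]]; then
        setup_extra_interfaces   # stand-in for whatever the guard enables
    fi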
00:05:15.465 [2024-11-20 09:38:46.231324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1135664 ] 00:05:15.725 [2024-11-20 09:38:46.539514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.725 [2024-11-20 09:38:46.571481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.294 09:38:47 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.295 09:38:47 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:16.295 09:38:47 json_config -- json_config/common.sh@26 -- # echo '' 00:05:16.295 00:05:16.295 09:38:47 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:16.295 09:38:47 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:16.295 09:38:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.295 09:38:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.295 09:38:47 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:16.295 09:38:47 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:16.295 09:38:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.295 09:38:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.295 09:38:47 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:16.295 09:38:47 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:16.295 09:38:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:16.865 09:38:47 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:16.865 09:38:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:16.865 09:38:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.865 09:38:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.865 09:38:47 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:16.865 09:38:47 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:16.865 09:38:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:16.865 09:38:47 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:16.865 09:38:47 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:16.865 09:38:47 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:16.865 09:38:47 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:16.865 09:38:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:17.124 09:38:47 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:17.125 09:38:47 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:17.125 09:38:47 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:17.125 09:38:47 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:17.125 09:38:47 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:17.125 09:38:47 json_config -- json_config/json_config.sh@54 -- # sort 00:05:17.125 09:38:47 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:17.125 09:38:47 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:17.125 09:38:47 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:17.125 09:38:47 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:17.125 09:38:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:17.125 09:38:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.125 09:38:47 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:17.125 09:38:47 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:17.125 09:38:47 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:17.125 09:38:47 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:17.125 09:38:47 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:17.125 09:38:47 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:17.125 09:38:47 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:17.125 09:38:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.125 09:38:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.125 09:38:47 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:17.125 09:38:47 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:17.125 09:38:47 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:17.125 09:38:47 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:17.125 09:38:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:17.125 MallocForNvmf0 00:05:17.385 09:38:48 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:17.385 09:38:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:17.385 MallocForNvmf1 00:05:17.385 09:38:48 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:17.385 09:38:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:17.646 [2024-11-20 09:38:48.369321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.646 09:38:48 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:17.646 09:38:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:17.907 09:38:48 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:17.907 09:38:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:17.907 09:38:48 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:17.907 09:38:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:18.166 09:38:48 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:18.166 09:38:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:18.429 [2024-11-20 09:38:49.087514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:18.429 09:38:49 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:18.429 09:38:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:18.429 09:38:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.429 09:38:49 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:18.429 09:38:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:18.429 09:38:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.429 09:38:49 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:18.429 09:38:49 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:18.429 09:38:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:18.690 MallocBdevForConfigChangeCheck 00:05:18.690 09:38:49 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:18.690 09:38:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:18.690 09:38:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.690 09:38:49 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:18.690 09:38:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.952 09:38:49 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:18.952 INFO: shutting down applications... 
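For reference, the NVMe-oF target configuration assembled above reduces to the following RPC sequence; this is a condensed sketch using the same rpc.py calls the test issued, with the workspace prefix dropped for readability:

  $ rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  $ rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  $ rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  $ rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $ rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $ rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $ rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420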
00:05:18.952 09:38:49 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:18.952 09:38:49 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:18.952 09:38:49 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:18.952 09:38:49 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:19.524 Calling clear_iscsi_subsystem 00:05:19.524 Calling clear_nvmf_subsystem 00:05:19.524 Calling clear_nbd_subsystem 00:05:19.524 Calling clear_ublk_subsystem 00:05:19.524 Calling clear_vhost_blk_subsystem 00:05:19.524 Calling clear_vhost_scsi_subsystem 00:05:19.524 Calling clear_bdev_subsystem 00:05:19.524 09:38:50 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:19.524 09:38:50 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:19.524 09:38:50 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:19.524 09:38:50 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.524 09:38:50 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:19.524 09:38:50 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:19.784 09:38:50 json_config -- json_config/json_config.sh@352 -- # break 00:05:19.784 09:38:50 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:19.785 09:38:50 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:19.785 09:38:50 json_config -- json_config/common.sh@31 -- # local app=target 00:05:19.785 09:38:50 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:19.785 09:38:50 json_config -- json_config/common.sh@35 -- # [[ -n 1135664 ]] 00:05:19.785 09:38:50 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1135664 00:05:19.785 09:38:50 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:19.785 09:38:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.785 09:38:50 json_config -- json_config/common.sh@41 -- # kill -0 1135664 00:05:19.785 09:38:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.355 09:38:51 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.355 09:38:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.355 09:38:51 json_config -- json_config/common.sh@41 -- # kill -0 1135664 00:05:20.355 09:38:51 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:20.355 09:38:51 json_config -- json_config/common.sh@43 -- # break 00:05:20.355 09:38:51 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:20.355 09:38:51 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:20.355 SPDK target shutdown done 00:05:20.355 09:38:51 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:20.355 INFO: relaunching applications... 
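The shutdown traced above is a plain signal-and-poll loop; a minimal sketch of the pattern from json_config/common.sh (condensed, error handling omitted):

  kill -SIGINT "$pid"                        # ask the target to exit cleanly
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || break    # pid gone: shutdown finished
      sleep 0.5
  done
  echo 'SPDK target shutdown done'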
00:05:20.355 09:38:51 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.355 09:38:51 json_config -- json_config/common.sh@9 -- # local app=target 00:05:20.355 09:38:51 json_config -- json_config/common.sh@10 -- # shift 00:05:20.355 09:38:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:20.355 09:38:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:20.355 09:38:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:20.355 09:38:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.355 09:38:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.355 09:38:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1136801 00:05:20.355 09:38:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:20.355 Waiting for target to run... 00:05:20.355 09:38:51 json_config -- json_config/common.sh@25 -- # waitforlisten 1136801 /var/tmp/spdk_tgt.sock 00:05:20.355 09:38:51 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.355 09:38:51 json_config -- common/autotest_common.sh@835 -- # '[' -z 1136801 ']' 00:05:20.355 09:38:51 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:20.355 09:38:51 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.355 09:38:51 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:20.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:20.355 09:38:51 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.355 09:38:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.355 [2024-11-20 09:38:51.131394] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:05:20.355 [2024-11-20 09:38:51.131452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136801 ] 00:05:20.616 [2024-11-20 09:38:51.433329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.616 [2024-11-20 09:38:51.458298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.188 [2024-11-20 09:38:51.955867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:21.188 [2024-11-20 09:38:51.988236] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:21.188 09:38:52 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.188 09:38:52 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:21.188 09:38:52 json_config -- json_config/common.sh@26 -- # echo '' 00:05:21.188 00:05:21.188 09:38:52 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:21.188 09:38:52 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:21.188 INFO: Checking if target configuration is the same... 
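The configuration check that follows normalizes both JSON documents before diffing; a condensed sketch of what json_diff.sh does, with illustrative temp-file names standing in for the mktemp results:

  rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json   # config of the running target
  config_filter.py -method sort < /tmp/live.json        > /tmp/a  # canonical key/array ordering
  config_filter.py -method sort < spdk_tgt_config.json  > /tmp/b  # config the target was launched with
  diff -u /tmp/a /tmp/b && echo 'INFO: JSON config files are the same'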
00:05:21.188 09:38:52 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.188 09:38:52 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:21.188 09:38:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.188 + '[' 2 -ne 2 ']' 00:05:21.188 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:21.188 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:21.188 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:21.188 +++ basename /dev/fd/62 00:05:21.188 ++ mktemp /tmp/62.XXX 00:05:21.188 + tmp_file_1=/tmp/62.VNr 00:05:21.188 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.188 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:21.188 + tmp_file_2=/tmp/spdk_tgt_config.json.KMb 00:05:21.188 + ret=0 00:05:21.188 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:21.759 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:21.759 + diff -u /tmp/62.VNr /tmp/spdk_tgt_config.json.KMb 00:05:21.759 + echo 'INFO: JSON config files are the same' 00:05:21.759 INFO: JSON config files are the same 00:05:21.759 + rm /tmp/62.VNr /tmp/spdk_tgt_config.json.KMb 00:05:21.759 + exit 0 00:05:21.759 09:38:52 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:21.759 09:38:52 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:21.759 INFO: changing configuration and checking if this can be detected... 00:05:21.759 09:38:52 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:21.759 09:38:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:21.759 09:38:52 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.759 09:38:52 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:21.759 09:38:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.759 + '[' 2 -ne 2 ']' 00:05:21.759 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:21.759 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:21.759 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:21.759 +++ basename /dev/fd/62 00:05:21.759 ++ mktemp /tmp/62.XXX 00:05:21.759 + tmp_file_1=/tmp/62.XXM 00:05:21.759 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.759 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:21.759 + tmp_file_2=/tmp/spdk_tgt_config.json.bVX 00:05:21.759 + ret=0 00:05:21.759 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:22.020 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:22.281 + diff -u /tmp/62.XXM /tmp/spdk_tgt_config.json.bVX 00:05:22.281 + ret=1 00:05:22.281 + echo '=== Start of file: /tmp/62.XXM ===' 00:05:22.281 + cat /tmp/62.XXM 00:05:22.281 + echo '=== End of file: /tmp/62.XXM ===' 00:05:22.281 + echo '' 00:05:22.281 + echo '=== Start of file: /tmp/spdk_tgt_config.json.bVX ===' 00:05:22.281 + cat /tmp/spdk_tgt_config.json.bVX 00:05:22.281 + echo '=== End of file: /tmp/spdk_tgt_config.json.bVX ===' 00:05:22.281 + echo '' 00:05:22.281 + rm /tmp/62.XXM /tmp/spdk_tgt_config.json.bVX 00:05:22.281 + exit 1 00:05:22.281 09:38:52 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:22.281 INFO: configuration change detected. 00:05:22.281 09:38:52 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:22.281 09:38:52 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:22.281 09:38:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:22.281 09:38:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.281 09:38:52 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:22.281 09:38:52 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:22.281 09:38:52 json_config -- json_config/json_config.sh@324 -- # [[ -n 1136801 ]] 00:05:22.281 09:38:52 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:22.281 09:38:52 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:22.281 09:38:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:22.281 09:38:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.281 09:38:53 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:22.281 09:38:53 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:22.281 09:38:53 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:22.281 09:38:53 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:22.281 09:38:53 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:22.281 09:38:53 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:22.281 09:38:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:22.281 09:38:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.281 09:38:53 json_config -- json_config/json_config.sh@330 -- # killprocess 1136801 00:05:22.281 09:38:53 json_config -- common/autotest_common.sh@954 -- # '[' -z 1136801 ']' 00:05:22.281 09:38:53 json_config -- common/autotest_common.sh@958 -- # kill -0 1136801 00:05:22.281 09:38:53 json_config -- common/autotest_common.sh@959 -- # uname 00:05:22.281 09:38:53 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.281 09:38:53 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1136801 00:05:22.281 09:38:53 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.281 09:38:53 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.281 09:38:53 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1136801' 00:05:22.281 killing process with pid 1136801 00:05:22.281 09:38:53 json_config -- common/autotest_common.sh@973 -- # kill 1136801 00:05:22.281 09:38:53 json_config -- common/autotest_common.sh@978 -- # wait 1136801 00:05:22.542 09:38:53 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.542 09:38:53 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:22.542 09:38:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:22.542 09:38:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.542 09:38:53 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:22.542 09:38:53 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:22.542 INFO: Success 00:05:22.542 00:05:22.542 real 0m7.479s 00:05:22.542 user 0m9.175s 00:05:22.542 sys 0m1.915s 00:05:22.542 09:38:53 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.542 09:38:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.542 ************************************ 00:05:22.542 END TEST json_config 00:05:22.542 ************************************ 00:05:22.804 09:38:53 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:22.804 09:38:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.804 09:38:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.804 09:38:53 -- common/autotest_common.sh@10 -- # set +x 00:05:22.804 ************************************ 00:05:22.804 START TEST json_config_extra_key 00:05:22.805 ************************************ 00:05:22.805 09:38:53 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:22.805 09:38:53 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:22.805 09:38:53 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:22.805 09:38:53 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:22.805 09:38:53 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.805 09:38:53 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:22.805 09:38:53 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.805 09:38:53 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:22.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.805 --rc genhtml_branch_coverage=1 00:05:22.805 --rc genhtml_function_coverage=1 00:05:22.805 --rc genhtml_legend=1 00:05:22.805 --rc geninfo_all_blocks=1 00:05:22.805 --rc geninfo_unexecuted_blocks=1 00:05:22.805 00:05:22.805 ' 00:05:22.805 09:38:53 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:22.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.805 --rc genhtml_branch_coverage=1 00:05:22.805 --rc genhtml_function_coverage=1 00:05:22.805 --rc genhtml_legend=1 00:05:22.805 --rc geninfo_all_blocks=1 00:05:22.805 --rc geninfo_unexecuted_blocks=1 00:05:22.805 00:05:22.805 ' 00:05:22.805 09:38:53 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:22.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.805 --rc genhtml_branch_coverage=1 00:05:22.805 --rc genhtml_function_coverage=1 00:05:22.805 --rc genhtml_legend=1 00:05:22.805 --rc geninfo_all_blocks=1 00:05:22.805 --rc geninfo_unexecuted_blocks=1 00:05:22.805 00:05:22.805 ' 00:05:22.805 09:38:53 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:22.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.805 --rc genhtml_branch_coverage=1 00:05:22.805 --rc genhtml_function_coverage=1 00:05:22.805 --rc genhtml_legend=1 00:05:22.805 --rc geninfo_all_blocks=1 00:05:22.805 --rc geninfo_unexecuted_blocks=1 00:05:22.805 00:05:22.805 ' 00:05:22.805 09:38:53 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:22.805 09:38:53 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:22.805 09:38:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.805 09:38:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.805 09:38:53 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.805 09:38:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:22.805 09:38:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:22.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:22.805 09:38:53 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:22.805 09:38:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:22.805 09:38:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:22.805 09:38:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:23.067 09:38:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:23.067 09:38:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:23.067 09:38:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:23.067 09:38:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:23.067 09:38:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:23.067 09:38:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:23.067 09:38:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:23.067 09:38:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:23.067 INFO: launching applications... 
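The launch below hands the extra-key JSON directly to the target binary; a condensed sketch of the command line used (workspace prefix shortened, backgrounding implied by the harness):

  spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json spdk/test/json_config/extra_key.json &
  # waitforlisten then polls the RPC socket (up to max_retries=100) before the test proceeds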
00:05:23.067 09:38:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:23.067 09:38:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:23.067 09:38:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:23.067 09:38:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:23.067 09:38:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:23.067 09:38:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:23.067 09:38:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.067 09:38:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.067 09:38:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1137554 00:05:23.067 09:38:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:23.067 Waiting for target to run... 00:05:23.067 09:38:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1137554 /var/tmp/spdk_tgt.sock 00:05:23.067 09:38:53 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1137554 ']' 00:05:23.067 09:38:53 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.067 09:38:53 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:23.067 09:38:53 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.067 09:38:53 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:23.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:23.067 09:38:53 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.067 09:38:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:23.067 [2024-11-20 09:38:53.792656] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:05:23.067 [2024-11-20 09:38:53.792737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137554 ] 00:05:23.328 [2024-11-20 09:38:54.083119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.328 [2024-11-20 09:38:54.110809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.956 09:38:54 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.956 09:38:54 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:23.956 09:38:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:23.956 00:05:23.956 09:38:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:23.956 INFO: shutting down applications... 
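The per-app bookkeeping traced above is ordinary bash associative arrays; a minimal sketch of the structures declared in json_config/json_config_extra_key.sh:

  declare -A app_pid=(['target']='')     # filled in once spdk_tgt is started
  declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
  declare -A app_params=(['target']='-m 0x1 -s 1024')
  declare -A configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')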
00:05:23.956 09:38:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:23.956 09:38:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:23.956 09:38:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:23.956 09:38:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1137554 ]] 00:05:23.956 09:38:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1137554 00:05:23.956 09:38:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:23.956 09:38:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.956 09:38:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1137554 00:05:23.956 09:38:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:24.217 09:38:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:24.217 09:38:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.217 09:38:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1137554 00:05:24.217 09:38:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:24.217 09:38:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:24.217 09:38:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:24.217 09:38:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:24.217 SPDK target shutdown done 00:05:24.217 09:38:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:24.217 Success 00:05:24.217 00:05:24.217 real 0m1.585s 00:05:24.217 user 0m1.198s 00:05:24.217 sys 0m0.423s 00:05:24.217 09:38:55 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.217 09:38:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:24.217 ************************************ 00:05:24.217 END TEST json_config_extra_key 00:05:24.217 ************************************ 00:05:24.479 09:38:55 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:24.479 09:38:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.479 09:38:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.479 09:38:55 -- common/autotest_common.sh@10 -- # set +x 00:05:24.479 ************************************ 00:05:24.479 START TEST alias_rpc 00:05:24.479 ************************************ 00:05:24.479 09:38:55 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:24.479 * Looking for test storage... 
00:05:24.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:24.479 09:38:55 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:24.479 09:38:55 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:24.479 09:38:55 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.479 09:38:55 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.479 09:38:55 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:24.479 09:38:55 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.479 09:38:55 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:24.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.479 --rc genhtml_branch_coverage=1 00:05:24.479 --rc genhtml_function_coverage=1 00:05:24.479 --rc genhtml_legend=1 00:05:24.479 --rc geninfo_all_blocks=1 00:05:24.479 --rc geninfo_unexecuted_blocks=1 00:05:24.479 00:05:24.479 ' 00:05:24.479 09:38:55 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.479 --rc genhtml_branch_coverage=1 00:05:24.479 --rc genhtml_function_coverage=1 00:05:24.479 --rc genhtml_legend=1 00:05:24.479 --rc geninfo_all_blocks=1 00:05:24.479 --rc geninfo_unexecuted_blocks=1 00:05:24.479 00:05:24.479 ' 00:05:24.479 09:38:55 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:24.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.479 --rc genhtml_branch_coverage=1 00:05:24.479 --rc genhtml_function_coverage=1 00:05:24.479 --rc genhtml_legend=1 00:05:24.479 --rc geninfo_all_blocks=1 00:05:24.479 --rc geninfo_unexecuted_blocks=1 00:05:24.479 00:05:24.479 ' 00:05:24.479 09:38:55 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.479 --rc genhtml_branch_coverage=1 00:05:24.479 --rc genhtml_function_coverage=1 00:05:24.479 --rc genhtml_legend=1 00:05:24.479 --rc geninfo_all_blocks=1 00:05:24.479 --rc geninfo_unexecuted_blocks=1 00:05:24.479 00:05:24.479 ' 00:05:24.479 09:38:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:24.479 09:38:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1137922 00:05:24.479 09:38:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1137922 00:05:24.479 09:38:55 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1137922 ']' 00:05:24.479 09:38:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.479 09:38:55 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.479 09:38:55 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.479 09:38:55 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.479 09:38:55 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.479 09:38:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.741 [2024-11-20 09:38:55.431064] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:05:24.741 [2024-11-20 09:38:55.431144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137922 ] 00:05:24.741 [2024-11-20 09:38:55.518350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.741 [2024-11-20 09:38:55.552647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.312 09:38:56 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.312 09:38:56 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:25.312 09:38:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:25.572 09:38:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1137922 00:05:25.572 09:38:56 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1137922 ']' 00:05:25.572 09:38:56 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1137922 00:05:25.572 09:38:56 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:25.572 09:38:56 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.572 09:38:56 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1137922 00:05:25.572 09:38:56 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.572 09:38:56 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.572 09:38:56 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1137922' 00:05:25.572 killing process with pid 1137922 00:05:25.572 09:38:56 alias_rpc -- common/autotest_common.sh@973 -- # kill 1137922 00:05:25.572 09:38:56 alias_rpc -- common/autotest_common.sh@978 -- # wait 1137922 00:05:25.833 00:05:25.833 real 0m1.492s 00:05:25.833 user 0m1.627s 00:05:25.833 sys 0m0.424s 00:05:25.833 09:38:56 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.833 09:38:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.833 ************************************ 00:05:25.833 END TEST alias_rpc 00:05:25.833 ************************************ 00:05:25.833 09:38:56 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:25.833 09:38:56 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:25.833 09:38:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.833 09:38:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.833 09:38:56 -- common/autotest_common.sh@10 -- # set +x 00:05:25.833 ************************************ 00:05:25.833 START TEST spdkcli_tcp 00:05:25.833 ************************************ 00:05:25.833 09:38:56 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:26.095 * Looking for test storage... 
00:05:26.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:26.095 09:38:56 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.095 09:38:56 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.095 09:38:56 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.095 09:38:56 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.095 09:38:56 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:26.095 09:38:56 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.095 09:38:56 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.095 --rc genhtml_branch_coverage=1 00:05:26.095 --rc genhtml_function_coverage=1 00:05:26.095 --rc genhtml_legend=1 00:05:26.095 --rc geninfo_all_blocks=1 00:05:26.095 --rc geninfo_unexecuted_blocks=1 00:05:26.095 00:05:26.095 ' 00:05:26.095 09:38:56 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.095 --rc genhtml_branch_coverage=1 00:05:26.095 --rc genhtml_function_coverage=1 00:05:26.095 --rc genhtml_legend=1 00:05:26.095 --rc geninfo_all_blocks=1 00:05:26.095 --rc 
geninfo_unexecuted_blocks=1 00:05:26.095 00:05:26.095 ' 00:05:26.095 09:38:56 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.095 --rc genhtml_branch_coverage=1 00:05:26.095 --rc genhtml_function_coverage=1 00:05:26.095 --rc genhtml_legend=1 00:05:26.095 --rc geninfo_all_blocks=1 00:05:26.095 --rc geninfo_unexecuted_blocks=1 00:05:26.095 00:05:26.095 ' 00:05:26.095 09:38:56 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.095 --rc genhtml_branch_coverage=1 00:05:26.095 --rc genhtml_function_coverage=1 00:05:26.095 --rc genhtml_legend=1 00:05:26.095 --rc geninfo_all_blocks=1 00:05:26.095 --rc geninfo_unexecuted_blocks=1 00:05:26.095 00:05:26.095 ' 00:05:26.095 09:38:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:26.095 09:38:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:26.095 09:38:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:26.095 09:38:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:26.095 09:38:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:26.095 09:38:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:26.095 09:38:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:26.095 09:38:56 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.095 09:38:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.095 09:38:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1138258 00:05:26.095 09:38:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1138258 00:05:26.095 09:38:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:26.095 09:38:56 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1138258 ']' 00:05:26.095 09:38:56 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.095 09:38:56 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.095 09:38:56 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.095 09:38:56 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.095 09:38:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.356 [2024-11-20 09:38:57.009610] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:05:26.356 [2024-11-20 09:38:57.009688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1138258 ] 00:05:26.356 [2024-11-20 09:38:57.097610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.356 [2024-11-20 09:38:57.133812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.356 [2024-11-20 09:38:57.133813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.925 09:38:57 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.925 09:38:57 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:26.925 09:38:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1138403 00:05:26.925 09:38:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:26.925 09:38:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:27.186 [ 00:05:27.186 "bdev_malloc_delete", 00:05:27.186 "bdev_malloc_create", 00:05:27.186 "bdev_null_resize", 00:05:27.186 "bdev_null_delete", 00:05:27.186 "bdev_null_create", 00:05:27.186 "bdev_nvme_cuse_unregister", 00:05:27.186 "bdev_nvme_cuse_register", 00:05:27.186 "bdev_opal_new_user", 00:05:27.186 "bdev_opal_set_lock_state", 00:05:27.186 "bdev_opal_delete", 00:05:27.186 "bdev_opal_get_info", 00:05:27.186 "bdev_opal_create", 00:05:27.186 "bdev_nvme_opal_revert", 00:05:27.186 "bdev_nvme_opal_init", 00:05:27.186 "bdev_nvme_send_cmd", 00:05:27.186 "bdev_nvme_set_keys", 00:05:27.186 "bdev_nvme_get_path_iostat", 00:05:27.186 "bdev_nvme_get_mdns_discovery_info", 00:05:27.186 "bdev_nvme_stop_mdns_discovery", 00:05:27.186 "bdev_nvme_start_mdns_discovery", 00:05:27.186 "bdev_nvme_set_multipath_policy", 00:05:27.186 "bdev_nvme_set_preferred_path", 00:05:27.186 "bdev_nvme_get_io_paths", 00:05:27.186 "bdev_nvme_remove_error_injection", 00:05:27.186 "bdev_nvme_add_error_injection", 00:05:27.186 "bdev_nvme_get_discovery_info", 00:05:27.186 "bdev_nvme_stop_discovery", 00:05:27.186 "bdev_nvme_start_discovery", 00:05:27.186 "bdev_nvme_get_controller_health_info", 00:05:27.186 "bdev_nvme_disable_controller", 00:05:27.186 "bdev_nvme_enable_controller", 00:05:27.186 "bdev_nvme_reset_controller", 00:05:27.186 "bdev_nvme_get_transport_statistics", 00:05:27.186 "bdev_nvme_apply_firmware", 00:05:27.186 "bdev_nvme_detach_controller", 00:05:27.186 "bdev_nvme_get_controllers", 00:05:27.186 "bdev_nvme_attach_controller", 00:05:27.186 "bdev_nvme_set_hotplug", 00:05:27.186 "bdev_nvme_set_options", 00:05:27.186 "bdev_passthru_delete", 00:05:27.186 "bdev_passthru_create", 00:05:27.186 "bdev_lvol_set_parent_bdev", 00:05:27.186 "bdev_lvol_set_parent", 00:05:27.186 "bdev_lvol_check_shallow_copy", 00:05:27.186 "bdev_lvol_start_shallow_copy", 00:05:27.186 "bdev_lvol_grow_lvstore", 00:05:27.186 "bdev_lvol_get_lvols", 00:05:27.186 "bdev_lvol_get_lvstores", 00:05:27.186 "bdev_lvol_delete", 00:05:27.186 "bdev_lvol_set_read_only", 00:05:27.186 "bdev_lvol_resize", 00:05:27.186 "bdev_lvol_decouple_parent", 00:05:27.186 "bdev_lvol_inflate", 00:05:27.186 "bdev_lvol_rename", 00:05:27.186 "bdev_lvol_clone_bdev", 00:05:27.186 "bdev_lvol_clone", 00:05:27.186 "bdev_lvol_snapshot", 00:05:27.186 "bdev_lvol_create", 00:05:27.186 "bdev_lvol_delete_lvstore", 00:05:27.186 "bdev_lvol_rename_lvstore", 
00:05:27.186 "bdev_lvol_create_lvstore", 00:05:27.186 "bdev_raid_set_options", 00:05:27.186 "bdev_raid_remove_base_bdev", 00:05:27.186 "bdev_raid_add_base_bdev", 00:05:27.186 "bdev_raid_delete", 00:05:27.186 "bdev_raid_create", 00:05:27.186 "bdev_raid_get_bdevs", 00:05:27.186 "bdev_error_inject_error", 00:05:27.186 "bdev_error_delete", 00:05:27.186 "bdev_error_create", 00:05:27.186 "bdev_split_delete", 00:05:27.186 "bdev_split_create", 00:05:27.186 "bdev_delay_delete", 00:05:27.186 "bdev_delay_create", 00:05:27.186 "bdev_delay_update_latency", 00:05:27.186 "bdev_zone_block_delete", 00:05:27.186 "bdev_zone_block_create", 00:05:27.186 "blobfs_create", 00:05:27.186 "blobfs_detect", 00:05:27.186 "blobfs_set_cache_size", 00:05:27.186 "bdev_aio_delete", 00:05:27.186 "bdev_aio_rescan", 00:05:27.186 "bdev_aio_create", 00:05:27.186 "bdev_ftl_set_property", 00:05:27.186 "bdev_ftl_get_properties", 00:05:27.186 "bdev_ftl_get_stats", 00:05:27.186 "bdev_ftl_unmap", 00:05:27.186 "bdev_ftl_unload", 00:05:27.186 "bdev_ftl_delete", 00:05:27.186 "bdev_ftl_load", 00:05:27.186 "bdev_ftl_create", 00:05:27.186 "bdev_virtio_attach_controller", 00:05:27.186 "bdev_virtio_scsi_get_devices", 00:05:27.186 "bdev_virtio_detach_controller", 00:05:27.186 "bdev_virtio_blk_set_hotplug", 00:05:27.186 "bdev_iscsi_delete", 00:05:27.186 "bdev_iscsi_create", 00:05:27.186 "bdev_iscsi_set_options", 00:05:27.186 "accel_error_inject_error", 00:05:27.186 "ioat_scan_accel_module", 00:05:27.186 "dsa_scan_accel_module", 00:05:27.186 "iaa_scan_accel_module", 00:05:27.186 "vfu_virtio_create_fs_endpoint", 00:05:27.186 "vfu_virtio_create_scsi_endpoint", 00:05:27.186 "vfu_virtio_scsi_remove_target", 00:05:27.186 "vfu_virtio_scsi_add_target", 00:05:27.186 "vfu_virtio_create_blk_endpoint", 00:05:27.186 "vfu_virtio_delete_endpoint", 00:05:27.186 "keyring_file_remove_key", 00:05:27.186 "keyring_file_add_key", 00:05:27.186 "keyring_linux_set_options", 00:05:27.186 "fsdev_aio_delete", 00:05:27.186 "fsdev_aio_create", 00:05:27.186 "iscsi_get_histogram", 00:05:27.186 "iscsi_enable_histogram", 00:05:27.186 "iscsi_set_options", 00:05:27.186 "iscsi_get_auth_groups", 00:05:27.186 "iscsi_auth_group_remove_secret", 00:05:27.186 "iscsi_auth_group_add_secret", 00:05:27.186 "iscsi_delete_auth_group", 00:05:27.186 "iscsi_create_auth_group", 00:05:27.186 "iscsi_set_discovery_auth", 00:05:27.186 "iscsi_get_options", 00:05:27.186 "iscsi_target_node_request_logout", 00:05:27.186 "iscsi_target_node_set_redirect", 00:05:27.186 "iscsi_target_node_set_auth", 00:05:27.186 "iscsi_target_node_add_lun", 00:05:27.186 "iscsi_get_stats", 00:05:27.186 "iscsi_get_connections", 00:05:27.186 "iscsi_portal_group_set_auth", 00:05:27.186 "iscsi_start_portal_group", 00:05:27.186 "iscsi_delete_portal_group", 00:05:27.186 "iscsi_create_portal_group", 00:05:27.186 "iscsi_get_portal_groups", 00:05:27.186 "iscsi_delete_target_node", 00:05:27.186 "iscsi_target_node_remove_pg_ig_maps", 00:05:27.186 "iscsi_target_node_add_pg_ig_maps", 00:05:27.186 "iscsi_create_target_node", 00:05:27.186 "iscsi_get_target_nodes", 00:05:27.186 "iscsi_delete_initiator_group", 00:05:27.186 "iscsi_initiator_group_remove_initiators", 00:05:27.186 "iscsi_initiator_group_add_initiators", 00:05:27.186 "iscsi_create_initiator_group", 00:05:27.186 "iscsi_get_initiator_groups", 00:05:27.186 "nvmf_set_crdt", 00:05:27.186 "nvmf_set_config", 00:05:27.186 "nvmf_set_max_subsystems", 00:05:27.186 "nvmf_stop_mdns_prr", 00:05:27.186 "nvmf_publish_mdns_prr", 00:05:27.186 "nvmf_subsystem_get_listeners", 00:05:27.186 
"nvmf_subsystem_get_qpairs", 00:05:27.186 "nvmf_subsystem_get_controllers", 00:05:27.186 "nvmf_get_stats", 00:05:27.186 "nvmf_get_transports", 00:05:27.186 "nvmf_create_transport", 00:05:27.186 "nvmf_get_targets", 00:05:27.186 "nvmf_delete_target", 00:05:27.186 "nvmf_create_target", 00:05:27.186 "nvmf_subsystem_allow_any_host", 00:05:27.186 "nvmf_subsystem_set_keys", 00:05:27.186 "nvmf_subsystem_remove_host", 00:05:27.186 "nvmf_subsystem_add_host", 00:05:27.186 "nvmf_ns_remove_host", 00:05:27.186 "nvmf_ns_add_host", 00:05:27.187 "nvmf_subsystem_remove_ns", 00:05:27.187 "nvmf_subsystem_set_ns_ana_group", 00:05:27.187 "nvmf_subsystem_add_ns", 00:05:27.187 "nvmf_subsystem_listener_set_ana_state", 00:05:27.187 "nvmf_discovery_get_referrals", 00:05:27.187 "nvmf_discovery_remove_referral", 00:05:27.187 "nvmf_discovery_add_referral", 00:05:27.187 "nvmf_subsystem_remove_listener", 00:05:27.187 "nvmf_subsystem_add_listener", 00:05:27.187 "nvmf_delete_subsystem", 00:05:27.187 "nvmf_create_subsystem", 00:05:27.187 "nvmf_get_subsystems", 00:05:27.187 "env_dpdk_get_mem_stats", 00:05:27.187 "nbd_get_disks", 00:05:27.187 "nbd_stop_disk", 00:05:27.187 "nbd_start_disk", 00:05:27.187 "ublk_recover_disk", 00:05:27.187 "ublk_get_disks", 00:05:27.187 "ublk_stop_disk", 00:05:27.187 "ublk_start_disk", 00:05:27.187 "ublk_destroy_target", 00:05:27.187 "ublk_create_target", 00:05:27.187 "virtio_blk_create_transport", 00:05:27.187 "virtio_blk_get_transports", 00:05:27.187 "vhost_controller_set_coalescing", 00:05:27.187 "vhost_get_controllers", 00:05:27.187 "vhost_delete_controller", 00:05:27.187 "vhost_create_blk_controller", 00:05:27.187 "vhost_scsi_controller_remove_target", 00:05:27.187 "vhost_scsi_controller_add_target", 00:05:27.187 "vhost_start_scsi_controller", 00:05:27.187 "vhost_create_scsi_controller", 00:05:27.187 "thread_set_cpumask", 00:05:27.187 "scheduler_set_options", 00:05:27.187 "framework_get_governor", 00:05:27.187 "framework_get_scheduler", 00:05:27.187 "framework_set_scheduler", 00:05:27.187 "framework_get_reactors", 00:05:27.187 "thread_get_io_channels", 00:05:27.187 "thread_get_pollers", 00:05:27.187 "thread_get_stats", 00:05:27.187 "framework_monitor_context_switch", 00:05:27.187 "spdk_kill_instance", 00:05:27.187 "log_enable_timestamps", 00:05:27.187 "log_get_flags", 00:05:27.187 "log_clear_flag", 00:05:27.187 "log_set_flag", 00:05:27.187 "log_get_level", 00:05:27.187 "log_set_level", 00:05:27.187 "log_get_print_level", 00:05:27.187 "log_set_print_level", 00:05:27.187 "framework_enable_cpumask_locks", 00:05:27.187 "framework_disable_cpumask_locks", 00:05:27.187 "framework_wait_init", 00:05:27.187 "framework_start_init", 00:05:27.187 "scsi_get_devices", 00:05:27.187 "bdev_get_histogram", 00:05:27.187 "bdev_enable_histogram", 00:05:27.187 "bdev_set_qos_limit", 00:05:27.187 "bdev_set_qd_sampling_period", 00:05:27.187 "bdev_get_bdevs", 00:05:27.187 "bdev_reset_iostat", 00:05:27.187 "bdev_get_iostat", 00:05:27.187 "bdev_examine", 00:05:27.187 "bdev_wait_for_examine", 00:05:27.187 "bdev_set_options", 00:05:27.187 "accel_get_stats", 00:05:27.187 "accel_set_options", 00:05:27.187 "accel_set_driver", 00:05:27.187 "accel_crypto_key_destroy", 00:05:27.187 "accel_crypto_keys_get", 00:05:27.187 "accel_crypto_key_create", 00:05:27.187 "accel_assign_opc", 00:05:27.187 "accel_get_module_info", 00:05:27.187 "accel_get_opc_assignments", 00:05:27.187 "vmd_rescan", 00:05:27.187 "vmd_remove_device", 00:05:27.187 "vmd_enable", 00:05:27.187 "sock_get_default_impl", 00:05:27.187 "sock_set_default_impl", 
00:05:27.187 "sock_impl_set_options", 00:05:27.187 "sock_impl_get_options", 00:05:27.187 "iobuf_get_stats", 00:05:27.187 "iobuf_set_options", 00:05:27.187 "keyring_get_keys", 00:05:27.187 "vfu_tgt_set_base_path", 00:05:27.187 "framework_get_pci_devices", 00:05:27.187 "framework_get_config", 00:05:27.187 "framework_get_subsystems", 00:05:27.187 "fsdev_set_opts", 00:05:27.187 "fsdev_get_opts", 00:05:27.187 "trace_get_info", 00:05:27.187 "trace_get_tpoint_group_mask", 00:05:27.187 "trace_disable_tpoint_group", 00:05:27.187 "trace_enable_tpoint_group", 00:05:27.187 "trace_clear_tpoint_mask", 00:05:27.187 "trace_set_tpoint_mask", 00:05:27.187 "notify_get_notifications", 00:05:27.187 "notify_get_types", 00:05:27.187 "spdk_get_version", 00:05:27.187 "rpc_get_methods" 00:05:27.187 ] 00:05:27.187 09:38:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:27.187 09:38:57 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:27.187 09:38:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.187 09:38:58 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:27.187 09:38:58 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1138258 00:05:27.187 09:38:58 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1138258 ']' 00:05:27.187 09:38:58 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1138258 00:05:27.187 09:38:58 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:27.187 09:38:58 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.187 09:38:58 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1138258 00:05:27.187 09:38:58 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.187 09:38:58 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.187 09:38:58 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1138258' 00:05:27.187 killing process with pid 1138258 00:05:27.187 09:38:58 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1138258 00:05:27.187 09:38:58 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1138258 00:05:27.447 00:05:27.447 real 0m1.533s 00:05:27.447 user 0m2.766s 00:05:27.447 sys 0m0.493s 00:05:27.447 09:38:58 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.447 09:38:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.447 ************************************ 00:05:27.447 END TEST spdkcli_tcp 00:05:27.447 ************************************ 00:05:27.447 09:38:58 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.447 09:38:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.447 09:38:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.447 09:38:58 -- common/autotest_common.sh@10 -- # set +x 00:05:27.448 ************************************ 00:05:27.448 START TEST dpdk_mem_utility 00:05:27.448 ************************************ 00:05:27.448 09:38:58 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.708 * Looking for test storage... 
00:05:27.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:27.708 09:38:58 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:27.708 09:38:58 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:27.708 09:38:58 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:27.708 09:38:58 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:27.708 09:38:58 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.709 09:38:58 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.709 09:38:58 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.709 09:38:58 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:27.709 09:38:58 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.709 09:38:58 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:27.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.709 --rc genhtml_branch_coverage=1 00:05:27.709 --rc genhtml_function_coverage=1 00:05:27.709 --rc genhtml_legend=1 00:05:27.709 --rc geninfo_all_blocks=1 00:05:27.709 --rc geninfo_unexecuted_blocks=1 00:05:27.709 00:05:27.709 ' 00:05:27.709 09:38:58 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:27.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.709 --rc 
genhtml_branch_coverage=1 00:05:27.709 --rc genhtml_function_coverage=1 00:05:27.709 --rc genhtml_legend=1 00:05:27.709 --rc geninfo_all_blocks=1 00:05:27.709 --rc geninfo_unexecuted_blocks=1 00:05:27.709 00:05:27.709 ' 00:05:27.709 09:38:58 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:27.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.709 --rc genhtml_branch_coverage=1 00:05:27.709 --rc genhtml_function_coverage=1 00:05:27.709 --rc genhtml_legend=1 00:05:27.709 --rc geninfo_all_blocks=1 00:05:27.709 --rc geninfo_unexecuted_blocks=1 00:05:27.709 00:05:27.709 ' 00:05:27.709 09:38:58 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:27.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.709 --rc genhtml_branch_coverage=1 00:05:27.709 --rc genhtml_function_coverage=1 00:05:27.709 --rc genhtml_legend=1 00:05:27.709 --rc geninfo_all_blocks=1 00:05:27.709 --rc geninfo_unexecuted_blocks=1 00:05:27.709 00:05:27.709 ' 00:05:27.709 09:38:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:27.709 09:38:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1138624 00:05:27.709 09:38:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1138624 00:05:27.709 09:38:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.709 09:38:58 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1138624 ']' 00:05:27.709 09:38:58 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.709 09:38:58 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.709 09:38:58 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.709 09:38:58 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.709 09:38:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.709 [2024-11-20 09:38:58.613101] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:05:27.709 [2024-11-20 09:38:58.613190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1138624 ] 00:05:27.969 [2024-11-20 09:38:58.704848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.969 [2024-11-20 09:38:58.739284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.538 09:38:59 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.538 09:38:59 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:28.538 09:38:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:28.538 09:38:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:28.538 09:38:59 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.538 09:38:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.538 { 00:05:28.538 "filename": "/tmp/spdk_mem_dump.txt" 00:05:28.538 } 00:05:28.538 09:38:59 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.538 09:38:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:28.538 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:28.538 1 heaps totaling size 810.000000 MiB 00:05:28.538 size: 810.000000 MiB heap id: 0 00:05:28.538 end heaps---------- 00:05:28.538 9 mempools totaling size 595.772034 MiB 00:05:28.538 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:28.538 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:28.538 size: 92.545471 MiB name: bdev_io_1138624 00:05:28.538 size: 50.003479 MiB name: msgpool_1138624 00:05:28.538 size: 36.509338 MiB name: fsdev_io_1138624 00:05:28.538 size: 21.763794 MiB name: PDU_Pool 00:05:28.538 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:28.538 size: 4.133484 MiB name: evtpool_1138624 00:05:28.538 size: 0.026123 MiB name: Session_Pool 00:05:28.538 end mempools------- 00:05:28.538 6 memzones totaling size 4.142822 MiB 00:05:28.538 size: 1.000366 MiB name: RG_ring_0_1138624 00:05:28.538 size: 1.000366 MiB name: RG_ring_1_1138624 00:05:28.538 size: 1.000366 MiB name: RG_ring_4_1138624 00:05:28.538 size: 1.000366 MiB name: RG_ring_5_1138624 00:05:28.538 size: 0.125366 MiB name: RG_ring_2_1138624 00:05:28.538 size: 0.015991 MiB name: RG_ring_3_1138624 00:05:28.538 end memzones------- 00:05:28.804 09:38:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:28.804 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:28.804 list of free elements. 
size: 10.862488 MiB 00:05:28.804 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:28.804 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:28.804 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:28.804 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:28.804 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:28.804 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:28.804 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:28.804 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:28.804 element at address: 0x20001a600000 with size: 0.582886 MiB 00:05:28.804 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:28.804 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:28.804 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:28.804 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:28.804 element at address: 0x200027a00000 with size: 0.410034 MiB 00:05:28.804 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:28.804 list of standard malloc elements. size: 199.218628 MiB 00:05:28.804 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:28.804 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:28.804 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:28.804 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:28.804 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:28.804 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:28.804 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:28.804 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:28.804 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:28.804 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:28.804 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:28.804 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:28.804 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:28.804 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:28.804 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:28.804 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:28.804 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:28.804 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:28.804 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:28.804 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:28.804 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:28.805 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:28.805 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:28.805 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:28.805 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:28.805 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:28.805 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:28.805 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:28.805 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:28.805 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:28.805 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:28.805 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:28.805 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:28.805 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:28.805 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:28.805 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:28.805 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:28.805 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:28.805 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:28.805 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:05:28.805 element at address: 0x200027a69040 with size: 0.000183 MiB 00:05:28.805 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:05:28.805 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:28.805 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:28.805 list of memzone associated elements. size: 599.918884 MiB 00:05:28.805 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:28.805 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:28.805 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:28.805 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:28.805 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:28.805 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1138624_0 00:05:28.805 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:28.805 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1138624_0 00:05:28.805 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:28.805 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1138624_0 00:05:28.805 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:28.805 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:28.805 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:28.805 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:28.805 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:28.805 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1138624_0 00:05:28.805 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:28.805 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1138624 00:05:28.805 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:28.805 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1138624 00:05:28.805 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:28.805 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:28.805 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:28.805 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:28.805 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:28.805 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:28.805 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:28.805 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:28.805 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:28.805 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1138624 00:05:28.805 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:28.805 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1138624 00:05:28.805 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:28.805 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1138624 00:05:28.805 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:05:28.805 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1138624 00:05:28.805 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:28.805 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1138624 00:05:28.805 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:28.805 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1138624 00:05:28.805 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:28.805 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:28.805 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:28.805 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:28.805 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:28.805 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:28.805 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:28.805 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1138624 00:05:28.805 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:28.805 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1138624 00:05:28.805 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:28.805 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:28.805 element at address: 0x200027a69100 with size: 0.023743 MiB 00:05:28.805 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:28.805 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:28.805 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1138624 00:05:28.805 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:05:28.805 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:28.805 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:28.805 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1138624 00:05:28.805 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:28.805 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1138624 00:05:28.805 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:28.805 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1138624 00:05:28.805 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:05:28.805 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:28.805 09:38:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:28.805 09:38:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1138624 00:05:28.805 09:38:59 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1138624 ']' 00:05:28.805 09:38:59 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1138624 00:05:28.805 09:38:59 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:28.805 09:38:59 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.805 09:38:59 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1138624 00:05:28.805 09:38:59 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.805 09:38:59 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.805 09:38:59 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1138624' 00:05:28.805 killing process with pid 1138624 00:05:28.805 09:38:59 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1138624 00:05:28.805 09:38:59 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1138624 00:05:29.072 00:05:29.072 real 0m1.408s 00:05:29.072 user 0m1.466s 00:05:29.072 sys 0m0.433s 00:05:29.072 09:38:59 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.072 09:38:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:29.072 ************************************ 00:05:29.072 END TEST dpdk_mem_utility 00:05:29.072 ************************************ 00:05:29.072 09:38:59 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:29.072 09:38:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.072 09:38:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.072 09:38:59 -- common/autotest_common.sh@10 -- # set +x 00:05:29.072 ************************************ 00:05:29.072 START TEST event 00:05:29.072 ************************************ 00:05:29.072 09:38:59 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:29.072 * Looking for test storage... 00:05:29.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:29.072 09:38:59 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.072 09:38:59 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.072 09:38:59 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.333 09:39:00 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.333 09:39:00 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.333 09:39:00 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.333 09:39:00 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.333 09:39:00 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.333 09:39:00 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.333 09:39:00 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.333 09:39:00 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.333 09:39:00 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.333 09:39:00 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.333 09:39:00 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.333 09:39:00 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.333 09:39:00 event -- scripts/common.sh@344 -- # case "$op" in 00:05:29.333 09:39:00 event -- scripts/common.sh@345 -- # : 1 00:05:29.333 09:39:00 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.333 09:39:00 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.333 09:39:00 event -- scripts/common.sh@365 -- # decimal 1 00:05:29.333 09:39:00 event -- scripts/common.sh@353 -- # local d=1 00:05:29.333 09:39:00 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.333 09:39:00 event -- scripts/common.sh@355 -- # echo 1 00:05:29.333 09:39:00 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.333 09:39:00 event -- scripts/common.sh@366 -- # decimal 2 00:05:29.333 09:39:00 event -- scripts/common.sh@353 -- # local d=2 00:05:29.333 09:39:00 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.333 09:39:00 event -- scripts/common.sh@355 -- # echo 2 00:05:29.333 09:39:00 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.333 09:39:00 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.333 09:39:00 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.333 09:39:00 event -- scripts/common.sh@368 -- # return 0 00:05:29.333 09:39:00 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.333 09:39:00 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.333 --rc genhtml_branch_coverage=1 00:05:29.333 --rc genhtml_function_coverage=1 00:05:29.333 --rc genhtml_legend=1 00:05:29.333 --rc geninfo_all_blocks=1 00:05:29.333 --rc geninfo_unexecuted_blocks=1 00:05:29.333 00:05:29.333 ' 00:05:29.333 09:39:00 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.333 --rc genhtml_branch_coverage=1 00:05:29.333 --rc genhtml_function_coverage=1 00:05:29.333 --rc genhtml_legend=1 00:05:29.333 --rc geninfo_all_blocks=1 00:05:29.333 --rc geninfo_unexecuted_blocks=1 00:05:29.333 00:05:29.333 ' 00:05:29.333 09:39:00 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.333 --rc genhtml_branch_coverage=1 00:05:29.333 --rc genhtml_function_coverage=1 00:05:29.333 --rc genhtml_legend=1 00:05:29.333 --rc geninfo_all_blocks=1 00:05:29.333 --rc geninfo_unexecuted_blocks=1 00:05:29.333 00:05:29.333 ' 00:05:29.333 09:39:00 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.333 --rc genhtml_branch_coverage=1 00:05:29.333 --rc genhtml_function_coverage=1 00:05:29.333 --rc genhtml_legend=1 00:05:29.333 --rc geninfo_all_blocks=1 00:05:29.333 --rc geninfo_unexecuted_blocks=1 00:05:29.333 00:05:29.333 ' 00:05:29.333 09:39:00 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:29.333 09:39:00 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:29.333 09:39:00 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:29.333 09:39:00 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:29.333 09:39:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.333 09:39:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.333 ************************************ 00:05:29.333 START TEST event_perf 00:05:29.333 ************************************ 00:05:29.333 09:39:00 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:29.333 Running I/O for 1 seconds...[2024-11-20 09:39:00.093792] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:05:29.333 [2024-11-20 09:39:00.093898] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1138939 ] 00:05:29.333 [2024-11-20 09:39:00.183767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:29.333 [2024-11-20 09:39:00.227464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.333 [2024-11-20 09:39:00.227659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.333 [2024-11-20 09:39:00.227690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.333 [2024-11-20 09:39:00.227691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.716 Running I/O for 1 seconds... 00:05:30.716 lcore 0: 177625 00:05:30.716 lcore 1: 177628 00:05:30.716 lcore 2: 177623 00:05:30.716 lcore 3: 177622 00:05:30.716 done. 00:05:30.716 00:05:30.716 real 0m1.184s 00:05:30.716 user 0m4.090s 00:05:30.716 sys 0m0.089s 00:05:30.716 09:39:01 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.716 09:39:01 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.716 ************************************ 00:05:30.716 END TEST event_perf 00:05:30.716 ************************************ 00:05:30.716 09:39:01 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:30.716 09:39:01 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:30.716 09:39:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.716 09:39:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.716 ************************************ 00:05:30.716 START TEST event_reactor 00:05:30.716 ************************************ 00:05:30.716 09:39:01 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:30.716 [2024-11-20 09:39:01.353913] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:05:30.716 [2024-11-20 09:39:01.354008] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1139322 ] 00:05:30.716 [2024-11-20 09:39:01.441020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.716 [2024-11-20 09:39:01.476234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.657 test_start 00:05:31.657 oneshot 00:05:31.657 tick 100 00:05:31.657 tick 100 00:05:31.657 tick 250 00:05:31.657 tick 100 00:05:31.657 tick 100 00:05:31.657 tick 100 00:05:31.657 tick 250 00:05:31.657 tick 500 00:05:31.657 tick 100 00:05:31.657 tick 100 00:05:31.657 tick 250 00:05:31.657 tick 100 00:05:31.657 tick 100 00:05:31.657 test_end 00:05:31.657 00:05:31.657 real 0m1.168s 00:05:31.657 user 0m1.081s 00:05:31.657 sys 0m0.082s 00:05:31.657 09:39:02 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.657 09:39:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:31.657 ************************************ 00:05:31.657 END TEST event_reactor 00:05:31.657 ************************************ 00:05:31.657 09:39:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.657 09:39:02 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:31.657 09:39:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.657 09:39:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.917 ************************************ 00:05:31.917 START TEST event_reactor_perf 00:05:31.917 ************************************ 00:05:31.917 09:39:02 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.917 [2024-11-20 09:39:02.604173] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:05:31.917 [2024-11-20 09:39:02.604267] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1139698 ] 00:05:31.917 [2024-11-20 09:39:02.692959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.917 [2024-11-20 09:39:02.731948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.857 test_start 00:05:32.857 test_end 00:05:32.857 Performance: 538413 events per second 00:05:32.857 00:05:32.857 real 0m1.176s 00:05:32.857 user 0m1.095s 00:05:32.857 sys 0m0.077s 00:05:32.857 09:39:03 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.857 09:39:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.857 ************************************ 00:05:32.857 END TEST event_reactor_perf 00:05:32.857 ************************************ 00:05:33.118 09:39:03 event -- event/event.sh@49 -- # uname -s 00:05:33.118 09:39:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:33.118 09:39:03 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:33.118 09:39:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.118 09:39:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.118 09:39:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.118 ************************************ 00:05:33.118 START TEST event_scheduler 00:05:33.118 ************************************ 00:05:33.118 09:39:03 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:33.118 * Looking for test storage... 
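The three event binaries exercised above share one invocation pattern: an optional core mask (-m) and a run time in seconds (-t), each wrapped by run_test. Condensed from the calls in the trace (paths relative to the spdk checkout; the comments restate results logged above):

    test/event/event_perf/event_perf -m 0xF -t 1   # per-lcore events/sec on 4 cores
    test/event/reactor/reactor -t 1                # single-core poller/timer ticks
    test/event/reactor_perf/reactor_perf -t 1      # reports events per second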
00:05:33.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:33.118 09:39:03 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:33.118 09:39:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:33.118 09:39:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:33.118 09:39:04 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:33.118 09:39:04 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.118 09:39:04 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.118 09:39:04 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.118 09:39:04 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.118 09:39:04 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.118 09:39:04 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.118 09:39:04 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.118 09:39:04 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.118 09:39:04 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.118 09:39:04 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.118 09:39:04 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.118 09:39:04 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:33.118 09:39:04 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:33.118 09:39:04 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.118 09:39:04 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.118 09:39:04 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:33.379 09:39:04 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:33.380 09:39:04 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.380 09:39:04 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:33.380 09:39:04 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.380 09:39:04 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:33.380 09:39:04 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:33.380 09:39:04 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.380 09:39:04 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:33.380 09:39:04 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.380 09:39:04 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.380 09:39:04 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.380 09:39:04 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:33.380 09:39:04 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.380 09:39:04 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:33.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.380 --rc genhtml_branch_coverage=1 00:05:33.380 --rc genhtml_function_coverage=1 00:05:33.380 --rc genhtml_legend=1 00:05:33.380 --rc geninfo_all_blocks=1 00:05:33.380 --rc geninfo_unexecuted_blocks=1 00:05:33.380 00:05:33.380 ' 00:05:33.380 09:39:04 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:33.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.380 --rc genhtml_branch_coverage=1 00:05:33.380 --rc genhtml_function_coverage=1 00:05:33.380 --rc genhtml_legend=1 00:05:33.380 --rc geninfo_all_blocks=1 00:05:33.380 --rc geninfo_unexecuted_blocks=1 00:05:33.380 00:05:33.380 ' 00:05:33.380 09:39:04 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:33.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.380 --rc genhtml_branch_coverage=1 00:05:33.380 --rc genhtml_function_coverage=1 00:05:33.380 --rc genhtml_legend=1 00:05:33.380 --rc geninfo_all_blocks=1 00:05:33.380 --rc geninfo_unexecuted_blocks=1 00:05:33.380 00:05:33.380 ' 00:05:33.380 09:39:04 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:33.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.380 --rc genhtml_branch_coverage=1 00:05:33.380 --rc genhtml_function_coverage=1 00:05:33.380 --rc genhtml_legend=1 00:05:33.380 --rc geninfo_all_blocks=1 00:05:33.380 --rc geninfo_unexecuted_blocks=1 00:05:33.380 00:05:33.380 ' 00:05:33.380 09:39:04 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:33.380 09:39:04 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1140087 00:05:33.380 09:39:04 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.380 09:39:04 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:33.380 09:39:04 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1140087 00:05:33.380 09:39:04 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1140087 ']' 00:05:33.380 09:39:04 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.380 09:39:04 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.380 09:39:04 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.380 09:39:04 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.380 09:39:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.380 [2024-11-20 09:39:04.098063] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:05:33.380 [2024-11-20 09:39:04.098136] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1140087 ] 00:05:33.380 [2024-11-20 09:39:04.192350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:33.380 [2024-11-20 09:39:04.247815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.380 [2024-11-20 09:39:04.247978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.380 [2024-11-20 09:39:04.248134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.380 [2024-11-20 09:39:04.248134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.331 09:39:04 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.331 09:39:04 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:34.331 09:39:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:34.331 09:39:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.331 09:39:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.331 [2024-11-20 09:39:04.914513] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:34.331 [2024-11-20 09:39:04.914532] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:34.331 [2024-11-20 09:39:04.914542] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:34.331 [2024-11-20 09:39:04.914548] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:34.331 [2024-11-20 09:39:04.914553] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:34.331 09:39:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.331 09:39:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:34.331 09:39:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.331 09:39:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.331 [2024-11-20 09:39:04.981795] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
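With the scheduler app started, the test configures it entirely through rpc_cmd, the harness wrapper around scripts/rpc.py. The sequence above plus the create-thread subtest that follows reduces to the sketch below; it assumes the default /var/tmp/spdk.sock socket and that scheduler_plugin is importable by rpc.py:

    scripts/rpc.py framework_set_scheduler dynamic   # dpdk governor may be unavailable, as noted above
    scripts/rpc.py framework_start_init
    # threads are created through the test's RPC plugin:
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m 0x1 -a 100               # name, cpumask, active percentage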
00:05:34.331 09:39:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.331 09:39:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:34.331 09:39:04 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.331 09:39:04 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.331 09:39:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.331 ************************************ 00:05:34.331 START TEST scheduler_create_thread 00:05:34.331 ************************************ 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.331 2 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.331 3 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.331 4 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.331 5 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.331 6 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.331 7 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.331 8 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.331 9 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.331 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.902 10 00:05:34.902 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.902 09:39:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:34.902 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.902 09:39:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.284 09:39:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.284 09:39:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:36.284 09:39:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:36.284 09:39:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.284 09:39:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.855 09:39:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.855 09:39:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:36.855 09:39:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.855 09:39:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.795 09:39:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.795 09:39:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:37.795 09:39:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:37.795 09:39:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.795 09:39:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.365 09:39:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.365 00:05:38.365 real 0m4.224s 00:05:38.365 user 0m0.026s 00:05:38.365 sys 0m0.006s 00:05:38.365 09:39:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.365 09:39:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.365 ************************************ 00:05:38.365 END TEST scheduler_create_thread 00:05:38.365 ************************************ 00:05:38.624 09:39:09 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:38.624 09:39:09 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1140087 00:05:38.624 09:39:09 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1140087 ']' 00:05:38.624 09:39:09 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1140087 00:05:38.624 09:39:09 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:38.624 09:39:09 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.624 09:39:09 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1140087 00:05:38.624 09:39:09 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:38.624 09:39:09 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:38.624 09:39:09 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1140087' 00:05:38.624 killing process with pid 1140087 00:05:38.624 09:39:09 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1140087 00:05:38.624 09:39:09 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1140087 00:05:38.624 [2024-11-20 09:39:09.523519] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
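Teardown here is the harness's killprocess idiom; every step is visible in the xtrace above. A condensed sketch of what common/autotest_common.sh does (the real helper carries extra environment checks):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                        # still running?
        if [ "$(uname)" = Linux ]; then
            # refuse to kill sudo itself, as the ps comm= check above does
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap and propagate status
    }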
00:05:38.885 00:05:38.885 real 0m5.836s 00:05:38.885 user 0m12.867s 00:05:38.885 sys 0m0.429s 00:05:38.885 09:39:09 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.885 09:39:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.885 ************************************ 00:05:38.885 END TEST event_scheduler 00:05:38.885 ************************************ 00:05:38.885 09:39:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:38.885 09:39:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:38.885 09:39:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.885 09:39:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.885 09:39:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.885 ************************************ 00:05:38.885 START TEST app_repeat 00:05:38.885 ************************************ 00:05:38.885 09:39:09 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:38.885 09:39:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.885 09:39:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.885 09:39:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:38.885 09:39:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.885 09:39:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:38.885 09:39:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:38.885 09:39:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:38.885 09:39:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1141420 00:05:38.885 09:39:09 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:38.885 09:39:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.885 09:39:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1141420' 00:05:38.885 Process app_repeat pid: 1141420 00:05:38.885 09:39:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.885 09:39:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:38.885 spdk_app_start Round 0 00:05:38.885 09:39:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1141420 /var/tmp/spdk-nbd.sock 00:05:38.885 09:39:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1141420 ']' 00:05:38.885 09:39:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.885 09:39:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.885 09:39:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.885 09:39:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.885 09:39:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:39.146 [2024-11-20 09:39:09.799317] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
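app_repeat is launched the way every SPDK test app in this log is: start it on a private RPC socket, remember the pid, install a cleanup trap, and block until the socket answers. A condensed sketch of that pattern (the polling loop is a simplified stand-in for autotest's waitforlisten helper, not its verbatim code):

    SOCK=/var/tmp/spdk-nbd.sock
    ./test/event/app_repeat/app_repeat -r "$SOCK" -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'kill $repeat_pid; exit 1' SIGINT SIGTERM EXIT

    # Block until the app answers on its UNIX-domain RPC socket.
    until ./scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$repeat_pid" 2>/dev/null || exit 1   # bail out if the app died
        sleep 0.1
    done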
00:05:39.146 [2024-11-20 09:39:09.799389] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1141420 ] 00:05:39.146 [2024-11-20 09:39:09.886146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.146 [2024-11-20 09:39:09.919510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.146 [2024-11-20 09:39:09.919512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.146 09:39:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.146 09:39:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:39.146 09:39:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.406 Malloc0 00:05:39.406 09:39:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.665 Malloc1 00:05:39.665 09:39:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.665 09:39:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.666 09:39:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.666 09:39:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.666 09:39:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.666 09:39:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.666 09:39:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.666 09:39:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.666 09:39:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.666 09:39:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.666 09:39:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.666 09:39:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.666 09:39:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:39.666 09:39:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.666 09:39:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.666 09:39:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.666 /dev/nbd0 00:05:39.666 09:39:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.666 09:39:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.666 09:39:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:39.666 09:39:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:39.666 09:39:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:39.666 09:39:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:39.666 09:39:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:39.666 09:39:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:39.666 09:39:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:39.666 09:39:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:39.666 09:39:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.666 1+0 records in 00:05:39.666 1+0 records out 00:05:39.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214097 s, 19.1 MB/s 00:05:39.666 09:39:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.666 09:39:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:39.666 09:39:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.666 09:39:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:39.666 09:39:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:39.666 09:39:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.666 09:39:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.666 09:39:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.926 /dev/nbd1 00:05:39.926 09:39:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.926 09:39:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.926 09:39:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:39.926 09:39:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:39.926 09:39:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:39.926 09:39:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:39.926 09:39:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:39.926 09:39:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:39.926 09:39:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:39.926 09:39:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:39.926 09:39:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.926 1+0 records in 00:05:39.926 1+0 records out 00:05:39.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311059 s, 13.2 MB/s 00:05:39.926 09:39:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.926 09:39:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:39.926 09:39:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.926 09:39:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:39.926 09:39:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:39.926 09:39:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.926 09:39:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.926 
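Each nbd_start_disk call is followed by the waitfornbd handshake traced above at autotest_common.sh@872-893: poll /proc/partitions until the kernel publishes the device, then prove it is readable by pulling one 4 KiB block with O_DIRECT. The helper's logic condensed (a sketch, not the verbatim autotest_common.sh source; the temp path and sleep interval are assumptions):

    waitfornbd() {
        local nbd_name=$1 i size tmp=/tmp/nbdtest
        # 1) wait for the device node to appear in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # 2) read one 4 KiB block with O_DIRECT and check data really arrived
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [ "$size" != 0 ] && return 0
            sleep 0.1
        done
        return 1
    }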
09:39:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.926 09:39:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.926 09:39:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.187 09:39:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.187 { 00:05:40.187 "nbd_device": "/dev/nbd0", 00:05:40.187 "bdev_name": "Malloc0" 00:05:40.187 }, 00:05:40.187 { 00:05:40.187 "nbd_device": "/dev/nbd1", 00:05:40.187 "bdev_name": "Malloc1" 00:05:40.187 } 00:05:40.187 ]' 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.187 { 00:05:40.187 "nbd_device": "/dev/nbd0", 00:05:40.187 "bdev_name": "Malloc0" 00:05:40.187 }, 00:05:40.187 { 00:05:40.187 "nbd_device": "/dev/nbd1", 00:05:40.187 "bdev_name": "Malloc1" 00:05:40.187 } 00:05:40.187 ]' 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.187 /dev/nbd1' 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.187 /dev/nbd1' 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.187 256+0 records in 00:05:40.187 256+0 records out 00:05:40.187 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127083 s, 82.5 MB/s 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.187 256+0 records in 00:05:40.187 256+0 records out 00:05:40.187 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118656 s, 88.4 MB/s 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.187 09:39:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.448 256+0 records in 00:05:40.448 256+0 records out 00:05:40.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129024 s, 81.3 MB/s 00:05:40.448 09:39:11 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.448 09:39:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.709 09:39:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.709 09:39:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.709 09:39:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.709 09:39:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.709 09:39:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:40.709 09:39:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.709 09:39:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.709 09:39:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.709 09:39:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.709 09:39:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.709 09:39:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.971 09:39:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.971 09:39:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.971 09:39:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.971 09:39:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.971 09:39:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.971 09:39:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.971 09:39:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:40.971 09:39:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.971 09:39:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.971 09:39:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.971 09:39:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.971 09:39:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.971 09:39:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.232 09:39:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.232 [2024-11-20 09:39:12.015508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.232 [2024-11-20 09:39:12.046392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.232 [2024-11-20 09:39:12.046393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.232 [2024-11-20 09:39:12.075387] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.232 [2024-11-20 09:39:12.075414] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.530 09:39:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:44.530 09:39:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:44.530 spdk_app_start Round 1 00:05:44.530 09:39:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1141420 /var/tmp/spdk-nbd.sock 00:05:44.530 09:39:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1141420 ']' 00:05:44.530 09:39:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.530 09:39:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.530 09:39:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
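Round 0 above is one complete data-integrity pass over both devices. The nbd_dd_data_verify write/verify flow it traces reduces to the following (a sketch; the temp-file location is an assumption, the dd and cmp invocations are the traced ones):

    tmp=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write phase: push 1 MiB of random data through each nbd device
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: byte-compare the first 1 MiB of each device with the file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"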
00:05:44.530 09:39:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.530 09:39:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.530 09:39:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.530 09:39:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:44.530 09:39:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.530 Malloc0 00:05:44.530 09:39:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.791 Malloc1 00:05:44.791 09:39:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.791 09:39:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.791 09:39:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.791 09:39:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.791 09:39:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.791 09:39:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.791 09:39:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.791 09:39:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.791 09:39:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.791 09:39:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.791 09:39:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.791 09:39:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.791 09:39:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:44.791 09:39:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.791 09:39:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.791 09:39:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.052 /dev/nbd0 00:05:45.052 09:39:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.052 09:39:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:45.052 1+0 records in 00:05:45.052 1+0 records out 00:05:45.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287014 s, 14.3 MB/s 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:45.052 09:39:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.052 09:39:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.052 09:39:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.052 /dev/nbd1 00:05:45.052 09:39:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.052 09:39:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.052 1+0 records in 00:05:45.052 1+0 records out 00:05:45.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000126143 s, 32.5 MB/s 00:05:45.052 09:39:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.313 09:39:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:45.313 09:39:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.313 09:39:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:45.313 09:39:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:45.313 09:39:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.313 09:39:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.313 09:39:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.313 09:39:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.313 09:39:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.313 09:39:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:45.313 { 00:05:45.313 "nbd_device": "/dev/nbd0", 00:05:45.313 "bdev_name": "Malloc0" 00:05:45.313 }, 00:05:45.313 { 00:05:45.313 "nbd_device": "/dev/nbd1", 00:05:45.313 "bdev_name": "Malloc1" 00:05:45.313 } 00:05:45.313 ]' 00:05:45.313 09:39:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.313 { 00:05:45.313 "nbd_device": "/dev/nbd0", 00:05:45.313 "bdev_name": "Malloc0" 00:05:45.313 }, 00:05:45.313 { 00:05:45.313 "nbd_device": "/dev/nbd1", 00:05:45.313 "bdev_name": "Malloc1" 00:05:45.313 } 00:05:45.313 ]' 00:05:45.313 09:39:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.313 09:39:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.313 /dev/nbd1' 00:05:45.313 09:39:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.313 /dev/nbd1' 00:05:45.313 09:39:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.313 09:39:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.313 09:39:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.313 09:39:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.313 09:39:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.313 09:39:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.313 09:39:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.313 09:39:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.313 09:39:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.313 09:39:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.313 09:39:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.313 09:39:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.574 256+0 records in 00:05:45.574 256+0 records out 00:05:45.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122304 s, 85.7 MB/s 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.574 256+0 records in 00:05:45.574 256+0 records out 00:05:45.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122372 s, 85.7 MB/s 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.574 256+0 records in 00:05:45.574 256+0 records out 00:05:45.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129373 s, 81.1 MB/s 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.574 09:39:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.836 09:39:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.836 09:39:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.836 09:39:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.836 09:39:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.836 09:39:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.836 09:39:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.836 09:39:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.836 09:39:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.836 09:39:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.836 09:39:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.836 09:39:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.098 09:39:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.098 09:39:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.098 09:39:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.098 09:39:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.098 09:39:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.098 09:39:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.098 09:39:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.098 09:39:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.098 09:39:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.098 09:39:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.098 09:39:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.098 09:39:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.098 09:39:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.359 09:39:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.359 [2024-11-20 09:39:17.181490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.359 [2024-11-20 09:39:17.210893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.359 [2024-11-20 09:39:17.210894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.359 [2024-11-20 09:39:17.240558] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.359 [2024-11-20 09:39:17.240590] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.655 09:39:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:49.655 09:39:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:49.655 spdk_app_start Round 2 00:05:49.655 09:39:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1141420 /var/tmp/spdk-nbd.sock 00:05:49.655 09:39:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1141420 ']' 00:05:49.655 09:39:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.655 09:39:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.655 09:39:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:49.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
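The attach/detach bookkeeping in each round is the jq pipeline traced in nbd_get_count: list the attached disks over RPC, project out the device nodes, and count them. In isolation (a sketch; socket path as above):

    SOCK=/var/tmp/spdk-nbd.sock
    json=$(./scripts/rpc.py -s "$SOCK" nbd_get_disks)
    # e.g. [ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" }, ... ]
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)   # grep -c exits 1 on zero matches
    [ "$count" -eq 2 ] || echo "expected 2 attached nbd devices, got $count"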
00:05:49.655 09:39:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.655 09:39:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.655 09:39:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.655 09:39:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:49.655 09:39:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.655 Malloc0 00:05:49.655 09:39:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.916 Malloc1 00:05:49.916 09:39:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.916 09:39:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.916 09:39:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.916 09:39:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.916 09:39:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.916 09:39:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.916 09:39:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.916 09:39:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.916 09:39:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.916 09:39:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.916 09:39:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.916 09:39:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.916 09:39:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:49.916 09:39:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.916 09:39:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.916 09:39:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.177 /dev/nbd0 00:05:50.177 09:39:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.177 09:39:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.177 09:39:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:50.177 09:39:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:50.177 09:39:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:50.177 09:39:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:50.177 09:39:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:50.177 09:39:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:50.177 09:39:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:50.177 09:39:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:50.177 09:39:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:50.177 1+0 records in 00:05:50.177 1+0 records out 00:05:50.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213091 s, 19.2 MB/s 00:05:50.177 09:39:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.177 09:39:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:50.177 09:39:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.177 09:39:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:50.178 09:39:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:50.178 09:39:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.178 09:39:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.178 09:39:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.178 /dev/nbd1 00:05:50.438 09:39:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.438 09:39:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.438 09:39:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:50.438 09:39:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:50.438 09:39:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:50.438 09:39:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:50.438 09:39:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:50.438 09:39:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:50.438 09:39:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:50.438 09:39:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:50.438 09:39:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.438 1+0 records in 00:05:50.439 1+0 records out 00:05:50.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000166377 s, 24.6 MB/s 00:05:50.439 09:39:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.439 09:39:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:50.439 09:39:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.439 09:39:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:50.439 09:39:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:50.439 09:39:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.439 09:39:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.439 09:39:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.439 09:39:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.439 09:39:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.439 09:39:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:50.439 { 00:05:50.439 "nbd_device": "/dev/nbd0", 00:05:50.439 "bdev_name": "Malloc0" 00:05:50.439 }, 00:05:50.439 { 00:05:50.439 "nbd_device": "/dev/nbd1", 00:05:50.439 "bdev_name": "Malloc1" 00:05:50.439 } 00:05:50.439 ]' 00:05:50.439 09:39:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.439 09:39:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.439 { 00:05:50.439 "nbd_device": "/dev/nbd0", 00:05:50.439 "bdev_name": "Malloc0" 00:05:50.439 }, 00:05:50.439 { 00:05:50.439 "nbd_device": "/dev/nbd1", 00:05:50.439 "bdev_name": "Malloc1" 00:05:50.439 } 00:05:50.439 ]' 00:05:50.439 09:39:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.439 /dev/nbd1' 00:05:50.439 09:39:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.439 /dev/nbd1' 00:05:50.439 09:39:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.439 09:39:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.439 09:39:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.699 256+0 records in 00:05:50.699 256+0 records out 00:05:50.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00300595 s, 349 MB/s 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.699 256+0 records in 00:05:50.699 256+0 records out 00:05:50.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122157 s, 85.8 MB/s 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.699 256+0 records in 00:05:50.699 256+0 records out 00:05:50.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136319 s, 76.9 MB/s 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.699 09:39:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.700 09:39:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.960 09:39:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.960 09:39:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.960 09:39:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.960 09:39:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.960 09:39:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.960 09:39:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.960 09:39:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.960 09:39:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.960 09:39:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.960 09:39:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.960 09:39:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.221 09:39:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.221 09:39:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.221 09:39:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.221 09:39:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.221 09:39:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.221 09:39:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.221 09:39:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:51.221 09:39:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.221 09:39:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.221 09:39:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.221 09:39:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.221 09:39:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.221 09:39:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.482 09:39:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.482 [2024-11-20 09:39:22.294404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.482 [2024-11-20 09:39:22.323968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.482 [2024-11-20 09:39:22.323968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.482 [2024-11-20 09:39:22.353109] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.482 [2024-11-20 09:39:22.353140] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.781 09:39:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1141420 /var/tmp/spdk-nbd.sock 00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1141420 ']' 00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
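Detach is symmetric: nbd_stop_disk over RPC, then the waitfornbd_exit poll traced at nbd_common.sh@35-45, which waits for the name to vanish from /proc/partitions. Condensed (a sketch; the sleep interval is an assumption):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        return 0
    }

    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    waitfornbd_exit nbd0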
00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:54.781 09:39:25 event.app_repeat -- event/event.sh@39 -- # killprocess 1141420 00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1141420 ']' 00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1141420 00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1141420 00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1141420' 00:05:54.781 killing process with pid 1141420 00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1141420 00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1141420 00:05:54.781 spdk_app_start is called in Round 0. 00:05:54.781 Shutdown signal received, stop current app iteration 00:05:54.781 Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 reinitialization... 00:05:54.781 spdk_app_start is called in Round 1. 00:05:54.781 Shutdown signal received, stop current app iteration 00:05:54.781 Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 reinitialization... 00:05:54.781 spdk_app_start is called in Round 2. 00:05:54.781 Shutdown signal received, stop current app iteration 00:05:54.781 Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 reinitialization... 00:05:54.781 spdk_app_start is called in Round 3. 
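killprocess, traced twice in this section, is more than a bare kill: it verifies the pid is set and alive, looks up the command name on Linux, refuses to treat a sudo wrapper as the target, and then kills and reaps. Its logic reduced to a sketch (the sudo branch here is a paraphrase of the traced checks, not verbatim autotest code):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
            if [ "$process_name" = sudo ]; then
                pid=$(pgrep -P "$pid")                  # target sudo's child, not sudo
            fi
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }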
00:05:54.781 Shutdown signal received, stop current app iteration 00:05:54.781 09:39:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:54.781 09:39:25 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:54.781 00:05:54.781 real 0m15.795s 00:05:54.781 user 0m34.710s 00:05:54.781 sys 0m2.255s 00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.781 09:39:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.781 ************************************ 00:05:54.781 END TEST app_repeat 00:05:54.781 ************************************ 00:05:54.781 09:39:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:54.781 09:39:25 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:54.781 09:39:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.781 09:39:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.781 09:39:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.781 ************************************ 00:05:54.781 START TEST cpu_locks 00:05:54.781 ************************************ 00:05:54.781 09:39:25 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:55.042 * Looking for test storage... 00:05:55.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:55.042 09:39:25 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.042 09:39:25 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.042 09:39:25 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.042 09:39:25 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.042 09:39:25 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:55.042 09:39:25 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.042 09:39:25 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.042 --rc genhtml_branch_coverage=1 00:05:55.042 --rc genhtml_function_coverage=1 00:05:55.042 --rc genhtml_legend=1 00:05:55.042 --rc geninfo_all_blocks=1 00:05:55.042 --rc geninfo_unexecuted_blocks=1 00:05:55.042 00:05:55.042 ' 00:05:55.042 09:39:25 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.042 --rc genhtml_branch_coverage=1 00:05:55.042 --rc genhtml_function_coverage=1 00:05:55.042 --rc genhtml_legend=1 00:05:55.042 --rc geninfo_all_blocks=1 00:05:55.043 --rc geninfo_unexecuted_blocks=1 00:05:55.043 00:05:55.043 ' 00:05:55.043 09:39:25 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.043 --rc genhtml_branch_coverage=1 00:05:55.043 --rc genhtml_function_coverage=1 00:05:55.043 --rc genhtml_legend=1 00:05:55.043 --rc geninfo_all_blocks=1 00:05:55.043 --rc geninfo_unexecuted_blocks=1 00:05:55.043 00:05:55.043 ' 00:05:55.043 09:39:25 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.043 --rc genhtml_branch_coverage=1 00:05:55.043 --rc genhtml_function_coverage=1 00:05:55.043 --rc genhtml_legend=1 00:05:55.043 --rc geninfo_all_blocks=1 00:05:55.043 --rc geninfo_unexecuted_blocks=1 00:05:55.043 00:05:55.043 ' 00:05:55.043 09:39:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:55.043 09:39:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:55.043 09:39:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:55.043 09:39:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:55.043 09:39:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.043 09:39:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.043 09:39:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.043 ************************************ 
00:05:55.043 START TEST default_locks 00:05:55.043 ************************************ 00:05:55.043 09:39:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:55.043 09:39:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1145144 00:05:55.043 09:39:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1145144 00:05:55.043 09:39:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1145144 ']' 00:05:55.043 09:39:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.043 09:39:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.043 09:39:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.043 09:39:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.043 09:39:25 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.043 09:39:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.043 [2024-11-20 09:39:25.932726] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:05:55.043 [2024-11-20 09:39:25.932790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145144 ] 00:05:55.303 [2024-11-20 09:39:26.019820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.303 [2024-11-20 09:39:26.054371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.874 09:39:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.874 09:39:26 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:55.874 09:39:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1145144 00:05:55.874 09:39:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1145144 00:05:55.874 09:39:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.444 lslocks: write error 00:05:56.444 09:39:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1145144 00:05:56.444 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1145144 ']' 00:05:56.444 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1145144 00:05:56.444 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:56.444 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.444 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1145144 00:05:56.444 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.444 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.444 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 1145144' 00:05:56.444 killing process with pid 1145144 00:05:56.444 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1145144 00:05:56.444 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1145144 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1145144 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1145144 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1145144 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1145144 ']' 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
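The default_locks assertion traced above is a one-liner: locks_exist greps the target's lock table for an spdk_cpu_lock entry, since a single-core spdk_tgt (-m 0x1) holds a flock on its per-core lock file. A sketch of the traced check:

locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock    # true if the app still holds a core lock
}

The stray "lslocks: write error" in the log is benign: grep -q exits as soon as it finds a match, so lslocks gets EPIPE while still writing out the rest of the table.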
00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1145144) - No such process 00:05:56.706 ERROR: process (pid: 1145144) is no longer running 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:56.706 00:05:56.706 real 0m1.523s 00:05:56.706 user 0m1.644s 00:05:56.706 sys 0m0.531s 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.706 09:39:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.706 ************************************ 00:05:56.706 END TEST default_locks 00:05:56.706 ************************************ 00:05:56.706 09:39:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:56.706 09:39:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.706 09:39:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.706 09:39:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.706 ************************************ 00:05:56.706 START TEST default_locks_via_rpc 00:05:56.706 ************************************ 00:05:56.706 09:39:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:56.706 09:39:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1145438 00:05:56.706 09:39:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1145438 00:05:56.706 09:39:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.706 09:39:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1145438 ']' 00:05:56.706 09:39:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.706 09:39:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.706 09:39:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
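After killprocess, the test flips the assertion around: waitforlisten on a dead pid must fail, and the NOT wrapper converts that expected failure into a test pass (the "No such process" and "es=1" lines above are that machinery at work). Condensed from the traced valid_exec_arg/es bookkeeping, a minimal sketch:

NOT() {
    local es=0
    "$@" || es=$?
    (( !es == 0 ))          # the trace's exact test: succeed only when the wrapped command failed
}

NOT waitforlisten 1145144   # passes here, because pid 1145144 was just killed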
00:05:56.706 09:39:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.706 09:39:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.706 [2024-11-20 09:39:27.529004] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:05:56.706 [2024-11-20 09:39:27.529067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145438 ] 00:05:56.706 [2024-11-20 09:39:27.616444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.967 [2024-11-20 09:39:27.651372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.538 09:39:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.538 09:39:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:57.538 09:39:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:57.538 09:39:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.538 09:39:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.538 09:39:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.538 09:39:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:57.538 09:39:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.538 09:39:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.538 09:39:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.538 09:39:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:57.538 09:39:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.538 09:39:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.538 09:39:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.538 09:39:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1145438 00:05:57.538 09:39:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1145438 00:05:57.538 09:39:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.108 09:39:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1145438 00:05:58.108 09:39:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1145438 ']' 00:05:58.108 09:39:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1145438 00:05:58.109 09:39:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:58.109 09:39:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.109 09:39:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1145438 00:05:58.109 09:39:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.109 
09:39:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.109 09:39:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1145438' 00:05:58.109 killing process with pid 1145438 00:05:58.109 09:39:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1145438 00:05:58.109 09:39:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1145438 00:05:58.374 00:05:58.374 real 0m1.570s 00:05:58.374 user 0m1.694s 00:05:58.374 sys 0m0.545s 00:05:58.374 09:39:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.374 09:39:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.374 ************************************ 00:05:58.374 END TEST default_locks_via_rpc 00:05:58.374 ************************************ 00:05:58.374 09:39:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:58.374 09:39:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.374 09:39:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.374 09:39:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.374 ************************************ 00:05:58.374 START TEST non_locking_app_on_locked_coremask 00:05:58.374 ************************************ 00:05:58.374 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:58.374 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1145782 00:05:58.374 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1145782 /var/tmp/spdk.sock 00:05:58.374 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.374 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1145782 ']' 00:05:58.374 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.374 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.374 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.374 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.374 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.374 [2024-11-20 09:39:29.177619] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
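default_locks_via_rpc, which just finished above, drives the same lock files over the RPC plane instead of the command line: the target starts with its core lock held, releases it with framework_disable_cpumask_locks, then re-acquires it with framework_enable_cpumask_locks. The traced sequence reduced to a sketch (rpc_cmd is the autotest_common.sh RPC wrapper seen in the trace):

rpc_cmd framework_disable_cpumask_locks    # lock files vanish; no_locks passes ('(( 0 != 0 ))' above)
rpc_cmd framework_enable_cpumask_locks     # lock is back; 'lslocks -p $pid | grep -q spdk_cpu_lock' passes again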
00:05:58.374 [2024-11-20 09:39:29.177684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145782 ] 00:05:58.374 [2024-11-20 09:39:29.261866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.691 [2024-11-20 09:39:29.295052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.282 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.282 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.282 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1145953 00:05:59.282 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1145953 /var/tmp/spdk2.sock 00:05:59.282 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1145953 ']' 00:05:59.282 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:59.282 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.282 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.282 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.282 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.282 09:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.282 [2024-11-20 09:39:30.025670] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:05:59.282 [2024-11-20 09:39:30.025726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145953 ] 00:05:59.282 [2024-11-20 09:39:30.113174] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:59.282 [2024-11-20 09:39:30.113195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.282 [2024-11-20 09:39:30.171398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.222 09:39:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.222 09:39:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:00.222 09:39:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1145782 00:06:00.222 09:39:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1145782 00:06:00.222 09:39:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.164 lslocks: write error 00:06:01.164 09:39:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1145782 00:06:01.164 09:39:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1145782 ']' 00:06:01.164 09:39:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1145782 00:06:01.164 09:39:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:01.164 09:39:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.164 09:39:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1145782 00:06:01.164 09:39:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.164 09:39:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.164 09:39:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1145782' 00:06:01.164 killing process with pid 1145782 00:06:01.164 09:39:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1145782 00:06:01.164 09:39:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1145782 00:06:01.424 09:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1145953 00:06:01.424 09:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1145953 ']' 00:06:01.424 09:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1145953 00:06:01.424 09:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:01.424 09:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.424 09:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1145953 00:06:01.424 09:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.424 09:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.424 09:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1145953' 00:06:01.424 
killing process with pid 1145953 00:06:01.424 09:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1145953 00:06:01.424 09:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1145953 00:06:01.685 00:06:01.685 real 0m3.287s 00:06:01.685 user 0m3.632s 00:06:01.685 sys 0m1.064s 00:06:01.685 09:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.685 09:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.685 ************************************ 00:06:01.685 END TEST non_locking_app_on_locked_coremask 00:06:01.685 ************************************ 00:06:01.685 09:39:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:01.685 09:39:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.685 09:39:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.685 09:39:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.685 ************************************ 00:06:01.685 START TEST locking_app_on_unlocked_coremask 00:06:01.685 ************************************ 00:06:01.685 09:39:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:01.685 09:39:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1146465 00:06:01.685 09:39:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1146465 /var/tmp/spdk.sock 00:06:01.685 09:39:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:01.685 09:39:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1146465 ']' 00:06:01.685 09:39:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.685 09:39:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.685 09:39:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.685 09:39:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.685 09:39:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.685 [2024-11-20 09:39:32.550806] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:06:01.685 [2024-11-20 09:39:32.550871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146465 ] 00:06:01.946 [2024-11-20 09:39:32.637973] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:01.946 [2024-11-20 09:39:32.638007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.946 [2024-11-20 09:39:32.677549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.517 09:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.517 09:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:02.517 09:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:02.517 09:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1146664 00:06:02.517 09:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1146664 /var/tmp/spdk2.sock 00:06:02.517 09:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1146664 ']' 00:06:02.517 09:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.517 09:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.517 09:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.517 09:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.517 09:39:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.517 [2024-11-20 09:39:33.395801] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
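These two tests probe both orientations of the same rule: a lock-enforcing instance and a lock-disabled instance can share core 0 in either order. Reduced to a sketch with the flags and socket paths from the trace ($rootdir standing in for the Jenkins checkout path):

"$rootdir/build/bin/spdk_tgt" -m 0x1 &                                          # enforcing: holds spdk_cpu_lock_000
"$rootdir/build/bin/spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
# The second instance logs 'CPU core locks deactivated.' and starts cleanly;
# swap which of the two carries --disable-cpumask-locks and both still run.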
00:06:02.517 [2024-11-20 09:39:33.395862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146664 ] 00:06:02.778 [2024-11-20 09:39:33.487216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.778 [2024-11-20 09:39:33.549872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.348 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.348 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:03.348 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1146664 00:06:03.348 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1146664 00:06:03.348 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.921 lslocks: write error 00:06:03.921 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1146465 00:06:03.921 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1146465 ']' 00:06:03.921 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1146465 00:06:03.921 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:03.921 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.921 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1146465 00:06:03.921 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.921 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.921 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1146465' 00:06:03.921 killing process with pid 1146465 00:06:03.921 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1146465 00:06:03.921 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1146465 00:06:04.181 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1146664 00:06:04.181 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1146664 ']' 00:06:04.181 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1146664 00:06:04.181 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:04.181 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.181 09:39:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1146664 00:06:04.181 09:39:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.181 09:39:35 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.181 09:39:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1146664' 00:06:04.181 killing process with pid 1146664 00:06:04.181 09:39:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1146664 00:06:04.181 09:39:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1146664 00:06:04.442 00:06:04.442 real 0m2.742s 00:06:04.442 user 0m3.059s 00:06:04.442 sys 0m0.837s 00:06:04.442 09:39:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.442 09:39:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.442 ************************************ 00:06:04.442 END TEST locking_app_on_unlocked_coremask 00:06:04.442 ************************************ 00:06:04.442 09:39:35 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:04.442 09:39:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.442 09:39:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.442 09:39:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.442 ************************************ 00:06:04.442 START TEST locking_app_on_locked_coremask 00:06:04.442 ************************************ 00:06:04.442 09:39:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:04.442 09:39:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1147036 00:06:04.442 09:39:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1147036 /var/tmp/spdk.sock 00:06:04.442 09:39:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.442 09:39:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1147036 ']' 00:06:04.442 09:39:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.442 09:39:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.442 09:39:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.442 09:39:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.442 09:39:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.703 [2024-11-20 09:39:35.362890] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:06:04.703 [2024-11-20 09:39:35.362944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1147036 ] 00:06:04.703 [2024-11-20 09:39:35.444697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.703 [2024-11-20 09:39:35.475831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.275 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.275 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:05.275 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1147322 00:06:05.275 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1147322 /var/tmp/spdk2.sock 00:06:05.275 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:05.275 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.275 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1147322 /var/tmp/spdk2.sock 00:06:05.275 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:05.275 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.275 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:05.275 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.275 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1147322 /var/tmp/spdk2.sock 00:06:05.275 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1147322 ']' 00:06:05.275 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.275 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.275 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.275 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.275 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.537 [2024-11-20 09:39:36.191507] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
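The body of waitforlisten is hidden behind xtrace_disable in this log, so only its banner and max_retries=100 are visible. A stand-in with the same contract (block until the pid answers on the given RPC socket, fail early if it dies) might look like the following; this is an assumption about the elided helper, not its verbatim source:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1                               # target died during startup
        if "$rootdir/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            return 0                                                          # socket is up and answering
        fi
        sleep 0.5
    done
    return 1
}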
00:06:05.537 [2024-11-20 09:39:36.191562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1147322 ] 00:06:05.537 [2024-11-20 09:39:36.279519] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1147036 has claimed it. 00:06:05.537 [2024-11-20 09:39:36.279554] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:06.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1147322) - No such process 00:06:06.107 ERROR: process (pid: 1147322) is no longer running 00:06:06.107 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.107 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:06.108 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:06.108 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.108 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.108 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.108 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1147036 00:06:06.108 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1147036 00:06:06.108 09:39:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.368 lslocks: write error 00:06:06.368 09:39:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1147036 00:06:06.368 09:39:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1147036 ']' 00:06:06.368 09:39:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1147036 00:06:06.368 09:39:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:06.368 09:39:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.368 09:39:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1147036 00:06:06.628 09:39:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.628 09:39:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.628 09:39:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1147036' 00:06:06.628 killing process with pid 1147036 00:06:06.628 09:39:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1147036 00:06:06.628 09:39:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1147036 00:06:06.628 00:06:06.628 real 0m2.209s 00:06:06.628 user 0m2.475s 00:06:06.628 sys 0m0.638s 00:06:06.628 09:39:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
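locking_app_on_locked_coremask is the negative counterpart: with no --disable-cpumask-locks anywhere, the second instance loses the race for core 0 and dies inside spdk_app_start, which is why the test wraps waitforlisten in NOT. A sketch of the traced flow:

"$rootdir/build/bin/spdk_tgt" -m 0x1 &                    # claims core 0
pid=$!
waitforlisten "$pid"
"$rootdir/build/bin/spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &
pid2=$!
NOT waitforlisten "$pid2" /var/tmp/spdk2.sock             # passes: pid2 aborted with
                                                          # 'Unable to acquire lock on assigned core mask - exiting.'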
00:06:06.628 09:39:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.628 ************************************ 00:06:06.628 END TEST locking_app_on_locked_coremask 00:06:06.628 ************************************ 00:06:06.889 09:39:37 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:06.889 09:39:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.889 09:39:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.889 09:39:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.889 ************************************ 00:06:06.889 START TEST locking_overlapped_coremask 00:06:06.889 ************************************ 00:06:06.889 09:39:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:06.889 09:39:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1147556 00:06:06.889 09:39:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1147556 /var/tmp/spdk.sock 00:06:06.889 09:39:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:06.889 09:39:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1147556 ']' 00:06:06.889 09:39:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.889 09:39:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.889 09:39:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.889 09:39:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.889 09:39:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.889 [2024-11-20 09:39:37.639248] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:06:06.889 [2024-11-20 09:39:37.639305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1147556 ] 00:06:06.889 [2024-11-20 09:39:37.724932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.889 [2024-11-20 09:39:37.761198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.889 [2024-11-20 09:39:37.761285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.889 [2024-11-20 09:39:37.761287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.829 09:39:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.829 09:39:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:07.829 09:39:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1147750 00:06:07.829 09:39:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1147750 /var/tmp/spdk2.sock 00:06:07.829 09:39:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:07.829 09:39:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:07.829 09:39:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1147750 /var/tmp/spdk2.sock 00:06:07.829 09:39:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:07.829 09:39:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.829 09:39:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:07.829 09:39:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.829 09:39:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1147750 /var/tmp/spdk2.sock 00:06:07.830 09:39:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1147750 ']' 00:06:07.830 09:39:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.830 09:39:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.830 09:39:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.830 09:39:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.830 09:39:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.830 [2024-11-20 09:39:38.499389] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:06:07.830 [2024-11-20 09:39:38.499443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1147750 ] 00:06:07.830 [2024-11-20 09:39:38.610713] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1147556 has claimed it. 00:06:07.830 [2024-11-20 09:39:38.610753] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:08.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1147750) - No such process 00:06:08.402 ERROR: process (pid: 1147750) is no longer running 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1147556 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1147556 ']' 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1147556 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1147556 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1147556' 00:06:08.402 killing process with pid 1147556 00:06:08.402 09:39:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1147556 00:06:08.402 09:39:39 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1147556 00:06:08.663 00:06:08.663 real 0m1.783s 00:06:08.663 user 0m5.159s 00:06:08.663 sys 0m0.393s 00:06:08.663 09:39:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.663 09:39:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.663 ************************************ 00:06:08.663 END TEST locking_overlapped_coremask 00:06:08.663 ************************************ 00:06:08.663 09:39:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:08.663 09:39:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.663 09:39:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.663 09:39:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.663 ************************************ 00:06:08.663 START TEST locking_overlapped_coremask_via_rpc 00:06:08.664 ************************************ 00:06:08.664 09:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:08.664 09:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1148014 00:06:08.664 09:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1148014 /var/tmp/spdk.sock 00:06:08.664 09:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:08.664 09:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1148014 ']' 00:06:08.664 09:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.664 09:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.664 09:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.664 09:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.664 09:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.664 [2024-11-20 09:39:39.500936] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:06:08.664 [2024-11-20 09:39:39.500997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148014 ] 00:06:08.924 [2024-11-20 09:39:39.587258] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
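Because this second scenario passes --disable-cpumask-locks, the target skips claiming its per-core lock files at startup, which is why two instances with overlapping masks can both come up below. The check_remaining_locks helper at the end of the test verifies exactly those files. A sketch, assuming the lock-file paths shown in this log:

    # Per-core lock files as checked by check_remaining_locks in this log.
    # Expected once locks are enabled on the 0x7 (3-core) target:
    #   /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo 'no CPU core locks currently held'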
00:06:08.924 [2024-11-20 09:39:39.587285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.924 [2024-11-20 09:39:39.624711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.924 [2024-11-20 09:39:39.624862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.924 [2024-11-20 09:39:39.624864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.494 09:39:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.495 09:39:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:09.495 09:39:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1148122 00:06:09.495 09:39:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1148122 /var/tmp/spdk2.sock 00:06:09.495 09:39:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1148122 ']' 00:06:09.495 09:39:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:09.495 09:39:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.495 09:39:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.495 09:39:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.495 09:39:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.495 09:39:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.495 [2024-11-20 09:39:40.361094] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:06:09.495 [2024-11-20 09:39:40.361150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148122 ] 00:06:09.755 [2024-11-20 09:39:40.472839] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
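At this point both targets run without core locks over masks that share core 2. The test then turns locks on at runtime: framework_enable_cpumask_locks succeeds on the first target, and the same RPC against the second target fails on the shared core, as the error below shows. The sequence, condensed from this log (socket paths as used by the test):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC framework_enable_cpumask_locks                          # first target (/var/tmp/spdk.sock): claims cores 0-2
    $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: core 2 already claimed, returns an error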
00:06:09.755 [2024-11-20 09:39:40.472868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.755 [2024-11-20 09:39:40.550791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.755 [2024-11-20 09:39:40.550949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.755 [2024-11-20 09:39:40.550949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.326 [2024-11-20 09:39:41.159238] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1148014 has claimed it. 
00:06:10.326 request: 00:06:10.326 { 00:06:10.326 "method": "framework_enable_cpumask_locks", 00:06:10.326 "req_id": 1 00:06:10.326 } 00:06:10.326 Got JSON-RPC error response 00:06:10.326 response: 00:06:10.326 { 00:06:10.326 "code": -32603, 00:06:10.326 "message": "Failed to claim CPU core: 2" 00:06:10.326 } 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1148014 /var/tmp/spdk.sock 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1148014 ']' 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.326 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.588 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.588 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:10.588 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1148122 /var/tmp/spdk2.sock 00:06:10.588 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1148122 ']' 00:06:10.588 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.588 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.588 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
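The failure comes back as a standard JSON-RPC error envelope: -32603 is the generic JSON-RPC internal-error code, which here carries the lock-claim message, unlike the -32601 method-not-found error seen in the cmdline test further down. A caller can branch on that output; a hedged sketch (path shortened, message text taken from the response above):

    if rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 2>&1 \
            | grep -q 'Failed to claim CPU core'; then
        echo 'expected failure: shared core is already locked'
    fi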
00:06:10.588 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.588 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.849 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.849 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:10.849 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:10.849 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.849 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.849 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.849 00:06:10.849 real 0m2.094s 00:06:10.849 user 0m0.873s 00:06:10.849 sys 0m0.146s 00:06:10.849 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.849 09:39:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.849 ************************************ 00:06:10.849 END TEST locking_overlapped_coremask_via_rpc 00:06:10.849 ************************************ 00:06:10.849 09:39:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:10.849 09:39:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1148014 ]] 00:06:10.849 09:39:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1148014 00:06:10.849 09:39:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1148014 ']' 00:06:10.849 09:39:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1148014 00:06:10.849 09:39:41 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:10.849 09:39:41 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.849 09:39:41 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1148014 00:06:10.849 09:39:41 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.849 09:39:41 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.849 09:39:41 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1148014' 00:06:10.849 killing process with pid 1148014 00:06:10.849 09:39:41 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1148014 00:06:10.849 09:39:41 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1148014 00:06:11.110 09:39:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1148122 ]] 00:06:11.110 09:39:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1148122 00:06:11.110 09:39:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1148122 ']' 00:06:11.110 09:39:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1148122 00:06:11.110 09:39:41 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:11.110 09:39:41 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:11.110 09:39:41 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1148122 00:06:11.110 09:39:41 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:11.110 09:39:41 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:11.110 09:39:41 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1148122' 00:06:11.110 killing process with pid 1148122 00:06:11.110 09:39:41 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1148122 00:06:11.110 09:39:41 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1148122 00:06:11.371 09:39:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:11.371 09:39:42 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:11.371 09:39:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1148014 ]] 00:06:11.371 09:39:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1148014 00:06:11.371 09:39:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1148014 ']' 00:06:11.371 09:39:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1148014 00:06:11.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1148014) - No such process 00:06:11.371 09:39:42 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1148014 is not found' 00:06:11.371 Process with pid 1148014 is not found 00:06:11.371 09:39:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1148122 ]] 00:06:11.371 09:39:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1148122 00:06:11.371 09:39:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1148122 ']' 00:06:11.371 09:39:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1148122 00:06:11.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1148122) - No such process 00:06:11.371 09:39:42 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1148122 is not found' 00:06:11.371 Process with pid 1148122 is not found 00:06:11.371 09:39:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:11.371 00:06:11.371 real 0m16.452s 00:06:11.371 user 0m28.506s 00:06:11.371 sys 0m5.106s 00:06:11.371 09:39:42 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.371 09:39:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.371 ************************************ 00:06:11.371 END TEST cpu_locks 00:06:11.371 ************************************ 00:06:11.371 00:06:11.371 real 0m42.291s 00:06:11.371 user 1m22.649s 00:06:11.371 sys 0m8.454s 00:06:11.371 09:39:42 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.371 09:39:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.371 ************************************ 00:06:11.371 END TEST event 00:06:11.371 ************************************ 00:06:11.371 09:39:42 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:11.371 09:39:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.371 09:39:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.371 09:39:42 -- common/autotest_common.sh@10 -- # set +x 00:06:11.371 ************************************ 00:06:11.371 START TEST thread 00:06:11.371 ************************************ 00:06:11.371 09:39:42 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:11.632 * Looking for test storage... 00:06:11.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:11.632 09:39:42 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:11.632 09:39:42 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:11.632 09:39:42 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:11.632 09:39:42 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:11.632 09:39:42 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.632 09:39:42 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.632 09:39:42 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.632 09:39:42 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.632 09:39:42 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.632 09:39:42 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.632 09:39:42 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.632 09:39:42 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.632 09:39:42 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.632 09:39:42 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.632 09:39:42 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.632 09:39:42 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:11.632 09:39:42 thread -- scripts/common.sh@345 -- # : 1 00:06:11.632 09:39:42 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.632 09:39:42 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.632 09:39:42 thread -- scripts/common.sh@365 -- # decimal 1 00:06:11.632 09:39:42 thread -- scripts/common.sh@353 -- # local d=1 00:06:11.632 09:39:42 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.632 09:39:42 thread -- scripts/common.sh@355 -- # echo 1 00:06:11.632 09:39:42 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.632 09:39:42 thread -- scripts/common.sh@366 -- # decimal 2 00:06:11.632 09:39:42 thread -- scripts/common.sh@353 -- # local d=2 00:06:11.632 09:39:42 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.632 09:39:42 thread -- scripts/common.sh@355 -- # echo 2 00:06:11.632 09:39:42 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.632 09:39:42 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.632 09:39:42 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.632 09:39:42 thread -- scripts/common.sh@368 -- # return 0 00:06:11.632 09:39:42 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.632 09:39:42 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:11.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.632 --rc genhtml_branch_coverage=1 00:06:11.632 --rc genhtml_function_coverage=1 00:06:11.632 --rc genhtml_legend=1 00:06:11.632 --rc geninfo_all_blocks=1 00:06:11.632 --rc geninfo_unexecuted_blocks=1 00:06:11.632 00:06:11.632 ' 00:06:11.632 09:39:42 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:11.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.632 --rc genhtml_branch_coverage=1 00:06:11.632 --rc genhtml_function_coverage=1 00:06:11.632 --rc genhtml_legend=1 00:06:11.632 --rc geninfo_all_blocks=1 00:06:11.632 --rc geninfo_unexecuted_blocks=1 00:06:11.632 
00:06:11.632 ' 00:06:11.632 09:39:42 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:11.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.632 --rc genhtml_branch_coverage=1 00:06:11.632 --rc genhtml_function_coverage=1 00:06:11.632 --rc genhtml_legend=1 00:06:11.632 --rc geninfo_all_blocks=1 00:06:11.632 --rc geninfo_unexecuted_blocks=1 00:06:11.632 00:06:11.632 ' 00:06:11.632 09:39:42 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:11.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.632 --rc genhtml_branch_coverage=1 00:06:11.632 --rc genhtml_function_coverage=1 00:06:11.632 --rc genhtml_legend=1 00:06:11.632 --rc geninfo_all_blocks=1 00:06:11.632 --rc geninfo_unexecuted_blocks=1 00:06:11.632 00:06:11.632 ' 00:06:11.632 09:39:42 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:11.633 09:39:42 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:11.633 09:39:42 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.633 09:39:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.633 ************************************ 00:06:11.633 START TEST thread_poller_perf 00:06:11.633 ************************************ 00:06:11.633 09:39:42 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:11.633 [2024-11-20 09:39:42.452152] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:06:11.633 [2024-11-20 09:39:42.452265] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148596 ] 00:06:11.633 [2024-11-20 09:39:42.541536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.893 [2024-11-20 09:39:42.581278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.893 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:12.833 [2024-11-20T08:39:43.749Z] ====================================== 00:06:12.833 [2024-11-20T08:39:43.749Z] busy:2407234110 (cyc) 00:06:12.833 [2024-11-20T08:39:43.749Z] total_run_count: 418000 00:06:12.833 [2024-11-20T08:39:43.749Z] tsc_hz: 2400000000 (cyc) 00:06:12.833 [2024-11-20T08:39:43.749Z] ====================================== 00:06:12.833 [2024-11-20T08:39:43.749Z] poller_cost: 5758 (cyc), 2399 (nsec) 00:06:12.833 00:06:12.833 real 0m1.184s 00:06:12.833 user 0m1.094s 00:06:12.833 sys 0m0.086s 00:06:12.833 09:39:43 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.833 09:39:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.833 ************************************ 00:06:12.833 END TEST thread_poller_perf 00:06:12.833 ************************************ 00:06:12.833 09:39:43 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:12.833 09:39:43 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:12.833 09:39:43 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.833 09:39:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.833 ************************************ 00:06:12.833 START TEST thread_poller_perf 00:06:12.833 ************************************ 00:06:12.833 09:39:43 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:12.833 [2024-11-20 09:39:43.712354] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:06:12.833 [2024-11-20 09:39:43.712453] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148921 ] 00:06:13.094 [2024-11-20 09:39:43.809873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.094 [2024-11-20 09:39:43.844402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.094 Running 1000 pollers for 1 seconds with 0 microseconds period. 
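poller_cost in these tables is simply busy cycles divided by total poller runs, converted to nanoseconds via the TSC frequency. Checking the first run above: 2407234110 cyc / 418000 runs ≈ 5758 cyc per poll, and 5758 cyc / 2.4 GHz ≈ 2399 ns, matching the reported line; the zero-period run that follows reads the same way. A one-liner to recompute it from the log's counters:

    busy=2407234110; runs=418000; tsc_hz=2400000000   # values from the first run above
    awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" \
        'BEGIN { c = b / r; printf "poller_cost: %d (cyc), %d (nsec)\n", c, c / hz * 1e9 }'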
00:06:14.036 [2024-11-20T08:39:44.952Z] ====================================== 00:06:14.036 [2024-11-20T08:39:44.952Z] busy:2401316706 (cyc) 00:06:14.036 [2024-11-20T08:39:44.952Z] total_run_count: 5561000 00:06:14.036 [2024-11-20T08:39:44.952Z] tsc_hz: 2400000000 (cyc) 00:06:14.036 [2024-11-20T08:39:44.952Z] ====================================== 00:06:14.036 [2024-11-20T08:39:44.952Z] poller_cost: 431 (cyc), 179 (nsec) 00:06:14.036 00:06:14.036 real 0m1.181s 00:06:14.036 user 0m1.082s 00:06:14.036 sys 0m0.094s 00:06:14.036 09:39:44 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.036 09:39:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.036 ************************************ 00:06:14.036 END TEST thread_poller_perf 00:06:14.036 ************************************ 00:06:14.036 09:39:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:14.036 00:06:14.036 real 0m2.713s 00:06:14.036 user 0m2.347s 00:06:14.036 sys 0m0.379s 00:06:14.036 09:39:44 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.036 09:39:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.036 ************************************ 00:06:14.036 END TEST thread 00:06:14.036 ************************************ 00:06:14.036 09:39:44 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:14.037 09:39:44 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:14.037 09:39:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.297 09:39:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.297 09:39:44 -- common/autotest_common.sh@10 -- # set +x 00:06:14.297 ************************************ 00:06:14.297 START TEST app_cmdline 00:06:14.297 ************************************ 00:06:14.297 09:39:44 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:14.297 * Looking for test storage... 
00:06:14.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:14.297 09:39:45 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.297 09:39:45 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.297 09:39:45 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.297 09:39:45 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.297 09:39:45 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.297 09:39:45 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.297 09:39:45 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.297 09:39:45 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.297 09:39:45 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.297 09:39:45 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.297 09:39:45 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.298 09:39:45 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:14.298 09:39:45 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.298 09:39:45 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.298 --rc genhtml_branch_coverage=1 00:06:14.298 --rc genhtml_function_coverage=1 00:06:14.298 --rc genhtml_legend=1 00:06:14.298 --rc geninfo_all_blocks=1 00:06:14.298 --rc geninfo_unexecuted_blocks=1 00:06:14.298 00:06:14.298 ' 00:06:14.298 09:39:45 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.298 --rc genhtml_branch_coverage=1 00:06:14.298 --rc genhtml_function_coverage=1 00:06:14.298 --rc genhtml_legend=1 00:06:14.298 --rc geninfo_all_blocks=1 00:06:14.298 --rc geninfo_unexecuted_blocks=1 
00:06:14.298 00:06:14.298 ' 00:06:14.298 09:39:45 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:14.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.298 --rc genhtml_branch_coverage=1 00:06:14.298 --rc genhtml_function_coverage=1 00:06:14.298 --rc genhtml_legend=1 00:06:14.298 --rc geninfo_all_blocks=1 00:06:14.298 --rc geninfo_unexecuted_blocks=1 00:06:14.298 00:06:14.298 ' 00:06:14.298 09:39:45 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.298 --rc genhtml_branch_coverage=1 00:06:14.298 --rc genhtml_function_coverage=1 00:06:14.298 --rc genhtml_legend=1 00:06:14.298 --rc geninfo_all_blocks=1 00:06:14.298 --rc geninfo_unexecuted_blocks=1 00:06:14.298 00:06:14.298 ' 00:06:14.298 09:39:45 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:14.298 09:39:45 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1149327 00:06:14.298 09:39:45 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1149327 00:06:14.298 09:39:45 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:14.298 09:39:45 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1149327 ']' 00:06:14.298 09:39:45 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.298 09:39:45 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.298 09:39:45 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.298 09:39:45 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.298 09:39:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:14.560 [2024-11-20 09:39:45.251531] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
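The cmdline test starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served: spdk_get_version returns the version JSON shown below, rpc_get_methods lists exactly the two allowed names, and any other call (env_dpdk_get_mem_stats, further down) is rejected with -32601 'Method not found'. Condensed, with the binary and script paths shortened:

    spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    rpc.py spdk_get_version         # allowed: returns the version JSON
    rpc.py rpc_get_methods          # allowed: lists rpc_get_methods and spdk_get_version
    rpc.py env_dpdk_get_mem_stats   # rejected: JSON-RPC error -32601, 'Method not found'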
00:06:14.560 [2024-11-20 09:39:45.251602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1149327 ] 00:06:14.560 [2024-11-20 09:39:45.337215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.560 [2024-11-20 09:39:45.367920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.501 09:39:46 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.501 09:39:46 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:15.501 09:39:46 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:15.501 { 00:06:15.501 "version": "SPDK v25.01-pre git sha1 6fc96a60f", 00:06:15.501 "fields": { 00:06:15.501 "major": 25, 00:06:15.501 "minor": 1, 00:06:15.501 "patch": 0, 00:06:15.501 "suffix": "-pre", 00:06:15.501 "commit": "6fc96a60f" 00:06:15.501 } 00:06:15.501 } 00:06:15.501 09:39:46 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:15.501 09:39:46 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:15.501 09:39:46 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:15.501 09:39:46 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:15.501 09:39:46 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:15.501 09:39:46 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:15.501 09:39:46 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.501 09:39:46 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:15.501 09:39:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:15.501 09:39:46 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.501 09:39:46 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:15.501 09:39:46 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:15.501 09:39:46 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:15.501 09:39:46 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:15.501 09:39:46 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:15.501 09:39:46 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:15.501 09:39:46 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.501 09:39:46 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:15.501 09:39:46 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.501 09:39:46 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:15.501 09:39:46 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.501 09:39:46 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:15.501 09:39:46 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:15.501 09:39:46 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:15.762 request: 00:06:15.762 { 00:06:15.762 "method": "env_dpdk_get_mem_stats", 00:06:15.762 "req_id": 1 00:06:15.762 } 00:06:15.762 Got JSON-RPC error response 00:06:15.762 response: 00:06:15.762 { 00:06:15.762 "code": -32601, 00:06:15.762 "message": "Method not found" 00:06:15.762 } 00:06:15.762 09:39:46 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:15.762 09:39:46 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.762 09:39:46 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:15.762 09:39:46 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.762 09:39:46 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1149327 00:06:15.762 09:39:46 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1149327 ']' 00:06:15.762 09:39:46 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1149327 00:06:15.762 09:39:46 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:15.762 09:39:46 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.762 09:39:46 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1149327 00:06:15.762 09:39:46 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.762 09:39:46 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.762 09:39:46 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1149327' 00:06:15.762 killing process with pid 1149327 00:06:15.762 09:39:46 app_cmdline -- common/autotest_common.sh@973 -- # kill 1149327 00:06:15.762 09:39:46 app_cmdline -- common/autotest_common.sh@978 -- # wait 1149327 00:06:16.044 00:06:16.044 real 0m1.727s 00:06:16.044 user 0m2.109s 00:06:16.044 sys 0m0.438s 00:06:16.044 09:39:46 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.044 09:39:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:16.044 ************************************ 00:06:16.044 END TEST app_cmdline 00:06:16.044 ************************************ 00:06:16.044 09:39:46 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:16.044 09:39:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.044 09:39:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.044 09:39:46 -- common/autotest_common.sh@10 -- # set +x 00:06:16.044 ************************************ 00:06:16.044 START TEST version 00:06:16.044 ************************************ 00:06:16.044 09:39:46 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:16.044 * Looking for test storage... 
00:06:16.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:16.044 09:39:46 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.044 09:39:46 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.044 09:39:46 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.310 09:39:46 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.310 09:39:46 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.310 09:39:46 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.310 09:39:46 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.310 09:39:46 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.310 09:39:46 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.310 09:39:46 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.310 09:39:46 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.310 09:39:46 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.310 09:39:46 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.310 09:39:46 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.310 09:39:46 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.310 09:39:46 version -- scripts/common.sh@344 -- # case "$op" in 00:06:16.310 09:39:46 version -- scripts/common.sh@345 -- # : 1 00:06:16.310 09:39:46 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.310 09:39:46 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.310 09:39:46 version -- scripts/common.sh@365 -- # decimal 1 00:06:16.310 09:39:46 version -- scripts/common.sh@353 -- # local d=1 00:06:16.310 09:39:46 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.310 09:39:46 version -- scripts/common.sh@355 -- # echo 1 00:06:16.310 09:39:46 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.310 09:39:46 version -- scripts/common.sh@366 -- # decimal 2 00:06:16.311 09:39:46 version -- scripts/common.sh@353 -- # local d=2 00:06:16.311 09:39:46 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.311 09:39:46 version -- scripts/common.sh@355 -- # echo 2 00:06:16.311 09:39:46 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.311 09:39:46 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.311 09:39:46 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.311 09:39:46 version -- scripts/common.sh@368 -- # return 0 00:06:16.311 09:39:46 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.311 09:39:46 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.311 --rc genhtml_branch_coverage=1 00:06:16.311 --rc genhtml_function_coverage=1 00:06:16.311 --rc genhtml_legend=1 00:06:16.311 --rc geninfo_all_blocks=1 00:06:16.311 --rc geninfo_unexecuted_blocks=1 00:06:16.311 00:06:16.311 ' 00:06:16.311 09:39:46 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.311 --rc genhtml_branch_coverage=1 00:06:16.311 --rc genhtml_function_coverage=1 00:06:16.311 --rc genhtml_legend=1 00:06:16.311 --rc geninfo_all_blocks=1 00:06:16.311 --rc geninfo_unexecuted_blocks=1 00:06:16.311 00:06:16.311 ' 00:06:16.311 09:39:46 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.311 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.311 --rc genhtml_branch_coverage=1 00:06:16.311 --rc genhtml_function_coverage=1 00:06:16.311 --rc genhtml_legend=1 00:06:16.311 --rc geninfo_all_blocks=1 00:06:16.311 --rc geninfo_unexecuted_blocks=1 00:06:16.311 00:06:16.311 ' 00:06:16.311 09:39:46 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.311 --rc genhtml_branch_coverage=1 00:06:16.311 --rc genhtml_function_coverage=1 00:06:16.311 --rc genhtml_legend=1 00:06:16.311 --rc geninfo_all_blocks=1 00:06:16.311 --rc geninfo_unexecuted_blocks=1 00:06:16.311 00:06:16.311 ' 00:06:16.311 09:39:46 version -- app/version.sh@17 -- # get_header_version major 00:06:16.311 09:39:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:16.311 09:39:46 version -- app/version.sh@14 -- # cut -f2 00:06:16.311 09:39:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:16.311 09:39:46 version -- app/version.sh@17 -- # major=25 00:06:16.311 09:39:46 version -- app/version.sh@18 -- # get_header_version minor 00:06:16.311 09:39:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:16.311 09:39:46 version -- app/version.sh@14 -- # cut -f2 00:06:16.311 09:39:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:16.311 09:39:47 version -- app/version.sh@18 -- # minor=1 00:06:16.311 09:39:47 version -- app/version.sh@19 -- # get_header_version patch 00:06:16.311 09:39:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:16.311 09:39:47 version -- app/version.sh@14 -- # cut -f2 00:06:16.311 09:39:47 version -- app/version.sh@14 -- # tr -d '"' 00:06:16.311 09:39:47 version -- app/version.sh@19 -- # patch=0 00:06:16.311 09:39:47 version -- app/version.sh@20 -- # get_header_version suffix 00:06:16.311 09:39:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:16.311 09:39:47 version -- app/version.sh@14 -- # tr -d '"' 00:06:16.311 09:39:47 version -- app/version.sh@14 -- # cut -f2 00:06:16.311 09:39:47 version -- app/version.sh@20 -- # suffix=-pre 00:06:16.311 09:39:47 version -- app/version.sh@22 -- # version=25.1 00:06:16.311 09:39:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:16.311 09:39:47 version -- app/version.sh@28 -- # version=25.1rc0 00:06:16.311 09:39:47 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:16.311 09:39:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:16.311 09:39:47 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:16.311 09:39:47 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:16.311 00:06:16.311 real 0m0.274s 00:06:16.311 user 0m0.164s 00:06:16.311 sys 0m0.155s 00:06:16.311 09:39:47 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.311 
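The version test derives the SPDK version two ways and asserts they match: get_header_version scrapes include/spdk/version.h with the grep/cut/tr pipeline traced above (major=25, minor=1, patch=0, suffix=-pre), while py_version comes from python3 -c 'import spdk; print(spdk.__version__)'. A condensed sketch of the header-scrape side (the wrapper name and short path are illustrative; the pipeline is the one in the trace):

    get_field() {   # e.g. get_field MAJOR prints 25
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
    }
    version="$(get_field MAJOR).$(get_field MINOR)"
    (( $(get_field PATCH) != 0 )) && version+=".$(get_field PATCH)"
    [[ $(get_field SUFFIX) == -pre ]] && version+=rc0   # yields 25.1rc0, matching spdk.__version__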
09:39:47 version -- common/autotest_common.sh@10 -- # set +x 00:06:16.311 ************************************ 00:06:16.311 END TEST version 00:06:16.311 ************************************ 00:06:16.311 09:39:47 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:16.311 09:39:47 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:16.311 09:39:47 -- spdk/autotest.sh@194 -- # uname -s 00:06:16.311 09:39:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:16.311 09:39:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:16.311 09:39:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:16.311 09:39:47 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:16.311 09:39:47 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:16.311 09:39:47 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:16.311 09:39:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:16.311 09:39:47 -- common/autotest_common.sh@10 -- # set +x 00:06:16.311 09:39:47 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:16.311 09:39:47 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:16.311 09:39:47 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:16.311 09:39:47 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:16.311 09:39:47 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:16.311 09:39:47 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:16.311 09:39:47 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:16.311 09:39:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:16.311 09:39:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.311 09:39:47 -- common/autotest_common.sh@10 -- # set +x 00:06:16.311 ************************************ 00:06:16.311 START TEST nvmf_tcp 00:06:16.311 ************************************ 00:06:16.311 09:39:47 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:16.574 * Looking for test storage... 
00:06:16.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:16.574 09:39:47 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.574 09:39:47 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.574 09:39:47 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.574 09:39:47 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:16.574 09:39:47 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:16.575 09:39:47 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.575 09:39:47 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:16.575 09:39:47 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.575 09:39:47 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.575 09:39:47 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.575 09:39:47 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:16.575 09:39:47 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.575 09:39:47 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.575 --rc genhtml_branch_coverage=1 00:06:16.575 --rc genhtml_function_coverage=1 00:06:16.575 --rc genhtml_legend=1 00:06:16.575 --rc geninfo_all_blocks=1 00:06:16.575 --rc geninfo_unexecuted_blocks=1 00:06:16.575 00:06:16.575 ' 00:06:16.575 09:39:47 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.575 --rc genhtml_branch_coverage=1 00:06:16.575 --rc genhtml_function_coverage=1 00:06:16.575 --rc genhtml_legend=1 00:06:16.575 --rc geninfo_all_blocks=1 00:06:16.575 --rc geninfo_unexecuted_blocks=1 00:06:16.575 00:06:16.575 ' 00:06:16.575 09:39:47 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:16.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.575 --rc genhtml_branch_coverage=1 00:06:16.575 --rc genhtml_function_coverage=1 00:06:16.575 --rc genhtml_legend=1 00:06:16.575 --rc geninfo_all_blocks=1 00:06:16.575 --rc geninfo_unexecuted_blocks=1 00:06:16.575 00:06:16.575 ' 00:06:16.575 09:39:47 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.575 --rc genhtml_branch_coverage=1 00:06:16.575 --rc genhtml_function_coverage=1 00:06:16.575 --rc genhtml_legend=1 00:06:16.575 --rc geninfo_all_blocks=1 00:06:16.575 --rc geninfo_unexecuted_blocks=1 00:06:16.575 00:06:16.575 ' 00:06:16.575 09:39:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:16.575 09:39:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:16.575 09:39:47 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:16.575 09:39:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:16.575 09:39:47 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.575 09:39:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.575 ************************************ 00:06:16.575 START TEST nvmf_target_core 00:06:16.575 ************************************ 00:06:16.575 09:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:16.837 * Looking for test storage... 00:06:16.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:16.837 09:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.837 09:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.837 09:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.837 09:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.838 --rc genhtml_branch_coverage=1 00:06:16.838 --rc genhtml_function_coverage=1 00:06:16.838 --rc genhtml_legend=1 00:06:16.838 --rc geninfo_all_blocks=1 00:06:16.838 --rc geninfo_unexecuted_blocks=1 00:06:16.838 00:06:16.838 ' 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.838 --rc genhtml_branch_coverage=1 00:06:16.838 --rc genhtml_function_coverage=1 00:06:16.838 --rc genhtml_legend=1 00:06:16.838 --rc geninfo_all_blocks=1 00:06:16.838 --rc geninfo_unexecuted_blocks=1 00:06:16.838 00:06:16.838 ' 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.838 --rc genhtml_branch_coverage=1 00:06:16.838 --rc genhtml_function_coverage=1 00:06:16.838 --rc genhtml_legend=1 00:06:16.838 --rc geninfo_all_blocks=1 00:06:16.838 --rc geninfo_unexecuted_blocks=1 00:06:16.838 00:06:16.838 ' 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.838 --rc genhtml_branch_coverage=1 00:06:16.838 --rc genhtml_function_coverage=1 00:06:16.838 --rc genhtml_legend=1 00:06:16.838 --rc geninfo_all_blocks=1 00:06:16.838 --rc geninfo_unexecuted_blocks=1 00:06:16.838 00:06:16.838 ' 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.838 09:39:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:16.839 09:39:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.839 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:16.839 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:16.839 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:16.839 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.839 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.839 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.839 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:16.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:16.839 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:16.839 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:16.839 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:16.839 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:16.839 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:16.839 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:16.839 09:39:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:16.839 09:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:16.839 09:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.839 09:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:16.839 
************************************ 00:06:16.839 START TEST nvmf_abort 00:06:16.839 ************************************ 00:06:16.839 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:17.101 * Looking for test storage... 00:06:17.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.101 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:17.101 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:17.101 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:17.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.102 --rc genhtml_branch_coverage=1 00:06:17.102 --rc genhtml_function_coverage=1 00:06:17.102 --rc genhtml_legend=1 00:06:17.102 --rc geninfo_all_blocks=1 00:06:17.102 --rc geninfo_unexecuted_blocks=1 00:06:17.102 00:06:17.102 ' 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:17.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.102 --rc genhtml_branch_coverage=1 00:06:17.102 --rc genhtml_function_coverage=1 00:06:17.102 --rc genhtml_legend=1 00:06:17.102 --rc geninfo_all_blocks=1 00:06:17.102 --rc geninfo_unexecuted_blocks=1 00:06:17.102 00:06:17.102 ' 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:17.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.102 --rc genhtml_branch_coverage=1 00:06:17.102 --rc genhtml_function_coverage=1 00:06:17.102 --rc genhtml_legend=1 00:06:17.102 --rc geninfo_all_blocks=1 00:06:17.102 --rc geninfo_unexecuted_blocks=1 00:06:17.102 00:06:17.102 ' 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:17.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.102 --rc genhtml_branch_coverage=1 00:06:17.102 --rc genhtml_function_coverage=1 00:06:17.102 --rc genhtml_legend=1 00:06:17.102 --rc geninfo_all_blocks=1 00:06:17.102 --rc geninfo_unexecuted_blocks=1 00:06:17.102 00:06:17.102 ' 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.102 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:17.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
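The "[: : integer expression expected" message that common.sh emits at line 33 (it recurs each time the file is sourced above) is the shell complaining about an empty string reaching a numeric test, exactly as the traced command '[' '' -eq 1 ']' shows. A minimal reproduction with a defensive fix; SPDK_TEST_FOO is a hypothetical stand-in, since the trace does not reveal which variable arrives empty:

  unset SPDK_TEST_FOO
  [ "$SPDK_TEST_FOO" -eq 1 ] && echo enabled       # -> [: : integer expression expected
  [ "${SPDK_TEST_FOO:-0}" -eq 1 ] && echo enabled  # defaulted to 0: test fails quietly

The noise is harmless here because the surrounding conditional simply takes the false branch, but the defaulted form avoids it entirely.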
00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:17.103 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:25.247 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:25.247 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:25.247 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:25.248 09:39:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:25.248 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:25.248 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:25.248 09:39:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:25.248 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:25.248 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:25.248 09:39:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:25.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:25.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:06:25.248 00:06:25.248 --- 10.0.0.2 ping statistics --- 00:06:25.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:25.248 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:25.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
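Everything from gather_supported_nvmf_pci_devs through the two pings above is nvmf_tcp_init building a loopback-free test rig: the target-side port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, so NVMe/TCP traffic crosses the physical link even on a single host. A condensed sketch of the same steps (interface names are machine-specific; requires root):

  NS=cvl_0_0_ns_spdk; TGT=cvl_0_0; INI=cvl_0_1
  ip netns add "$NS"                                 # namespace for the target side
  ip link set "$TGT" netns "$NS"                     # hide the target port from the root ns
  ip addr add 10.0.0.1/24 dev "$INI"                 # initiator address, root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
  ip link set "$INI" up
  ip netns exec "$NS" ip link set "$TGT" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                 # root ns -> namespaced target
  ip netns exec "$NS" ping -c 1 10.0.0.1             # and the reverse path

The sub-millisecond round-trip times in the ping output confirm the link works before any NVMe traffic is attempted.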
00:06:25.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:06:25.248 00:06:25.248 --- 10.0.0.1 ping statistics --- 00:06:25.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:25.248 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:25.248 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:25.249 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:25.249 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:25.249 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:25.249 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:25.249 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:25.249 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:25.249 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1153813 00:06:25.249 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1153813 00:06:25.249 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:25.249 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1153813 ']' 00:06:25.249 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.249 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.249 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.249 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.249 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:25.249 [2024-11-20 09:39:55.560433] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
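nvmfappstart then launches nvmf_tgt inside that namespace with -m 0xE, a CPU core bitmask: 0xE is binary 1110, so core 0 stays free and cores 1-3 run reactors, which matches the "Total cores available: 3" notice and the three "Reactor started on core" lines just below. A one-liner to decode any such mask:

  mask=0xE
  for core in {0..7}; do
    (( (mask >> core) & 1 )) && echo "core $core runs an SPDK reactor"
  done   # prints cores 1, 2 and 3 for 0xE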
00:06:25.249 [2024-11-20 09:39:55.560500] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:25.249 [2024-11-20 09:39:55.661316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.249 [2024-11-20 09:39:55.714995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:25.249 [2024-11-20 09:39:55.715047] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:25.249 [2024-11-20 09:39:55.715056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:25.249 [2024-11-20 09:39:55.715063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:25.249 [2024-11-20 09:39:55.715069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:25.249 [2024-11-20 09:39:55.717139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.249 [2024-11-20 09:39:55.717302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.249 [2024-11-20 09:39:55.717446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.510 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.510 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:25.510 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:25.510 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:25.510 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:25.771 [2024-11-20 09:39:56.440689] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:25.771 Malloc0 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:25.771 Delay0 
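With the target up, the test assembles its storage stack over RPC: a 64 MiB malloc bdev, then a delay bdev layered on top with all four latency knobs at 1000000 (microseconds, so roughly a second per I/O, if I read the delay bdev units right), which keeps requests in flight long enough for the abort test to have something to cancel. The equivalent standalone sequence with SPDK's rpc.py, assuming the default /var/tmp/spdk.sock socket (the subsystem and listener calls traced just below complete the picture):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0          # 64 MiB, 4 KiB blocks
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                 # avg/p99 read+write latency
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420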
00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:25.771 [2024-11-20 09:39:56.528066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.771 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:25.771 [2024-11-20 09:39:56.678321] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:28.317 Initializing NVMe Controllers 00:06:28.317 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:28.317 controller IO queue size 128 less than required 00:06:28.317 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:28.317 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:28.317 Initialization complete. Launching workers. 
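The test drives load with SPDK's userspace abort example at queue depth 128 (the "IO queue size 128 less than required" notice is the example flagging that some requests may queue in the driver rather than on the wire). For comparison, the NVME_HOST values exported from common.sh above would let the kernel initiator reach the same subsystem; a hypothetical manual session, assuming nvme-cli and the nvme-tcp module are present:

  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  nvme list                                     # Delay0 appears as a 64 MiB namespace
  nvme disconnect -n nqn.2016-06.io.spdk:cnode0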
00:06:28.317 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28656 00:06:28.317 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28717, failed to submit 62 00:06:28.317 success 28660, unsuccessful 57, failed 0 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:28.317 rmmod nvme_tcp 00:06:28.317 rmmod nvme_fabrics 00:06:28.317 rmmod nvme_keyring 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1153813 ']' 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1153813 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1153813 ']' 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1153813 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1153813 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1153813' 00:06:28.317 killing process with pid 1153813 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1153813 00:06:28.317 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1153813 00:06:28.317 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:28.317 09:39:59 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:28.317 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:28.317 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:28.317 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:28.317 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:28.317 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:28.317 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:28.317 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:28.317 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.317 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:28.317 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.860 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:30.860 00:06:30.860 real 0m13.459s 00:06:30.860 user 0m14.234s 00:06:30.860 sys 0m6.628s 00:06:30.860 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.860 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:30.860 ************************************ 00:06:30.860 END TEST nvmf_abort 00:06:30.860 ************************************ 00:06:30.860 09:40:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:30.860 09:40:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:30.861 ************************************ 00:06:30.861 START TEST nvmf_ns_hotplug_stress 00:06:30.861 ************************************ 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:30.861 * Looking for test storage... 
00:06:30.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:30.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.861 --rc genhtml_branch_coverage=1 00:06:30.861 --rc genhtml_function_coverage=1 00:06:30.861 --rc genhtml_legend=1 00:06:30.861 --rc geninfo_all_blocks=1 00:06:30.861 --rc geninfo_unexecuted_blocks=1 00:06:30.861 00:06:30.861 ' 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:30.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.861 --rc genhtml_branch_coverage=1 00:06:30.861 --rc genhtml_function_coverage=1 00:06:30.861 --rc genhtml_legend=1 00:06:30.861 --rc geninfo_all_blocks=1 00:06:30.861 --rc geninfo_unexecuted_blocks=1 00:06:30.861 00:06:30.861 ' 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:30.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.861 --rc genhtml_branch_coverage=1 00:06:30.861 --rc genhtml_function_coverage=1 00:06:30.861 --rc genhtml_legend=1 00:06:30.861 --rc geninfo_all_blocks=1 00:06:30.861 --rc geninfo_unexecuted_blocks=1 00:06:30.861 00:06:30.861 ' 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:30.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.861 --rc genhtml_branch_coverage=1 00:06:30.861 --rc genhtml_function_coverage=1 00:06:30.861 --rc genhtml_legend=1 00:06:30.861 --rc geninfo_all_blocks=1 00:06:30.861 --rc geninfo_unexecuted_blocks=1 00:06:30.861 00:06:30.861 ' 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.861 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:30.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:30.862 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:39.001 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.001 
09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:39.001 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:39.001 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:39.001 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:39.001 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:39.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:39.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:06:39.002 00:06:39.002 --- 10.0.0.2 ping statistics --- 00:06:39.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.002 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:06:39.002 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:39.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:39.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:06:39.002 00:06:39.002 --- 10.0.0.1 ping statistics --- 00:06:39.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.002 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:06:39.002 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:39.002 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:39.002 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:39.002 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:39.002 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:39.002 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:39.002 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:39.002 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:39.002 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:39.002 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:39.002 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:39.002 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.002 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:39.002 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1158822 00:06:39.002 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1158822 00:06:39.002 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:39.002 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
1158822 ']' 00:06:39.002 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.002 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.002 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.002 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.002 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:39.002 [2024-11-20 09:40:09.071222] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:06:39.002 [2024-11-20 09:40:09.071293] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.002 [2024-11-20 09:40:09.170213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.002 [2024-11-20 09:40:09.221063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:39.002 [2024-11-20 09:40:09.221113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:39.002 [2024-11-20 09:40:09.221121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.002 [2024-11-20 09:40:09.221129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.002 [2024-11-20 09:40:09.221136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
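The entries above show the test launching nvmf_tgt inside the freshly created cvl_0_0_ns_spdk namespace with core mask 0xE, then blocking until the app listens on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern, assuming a simple poll on the RPC socket (the real nvmfappstart/waitforlisten helpers in common/autotest_common.sh are more involved; SPDK_BIN and RPC here just name the paths already visible in the trace):

    # Sketch only: start nvmf_tgt in the target netns, then poll the RPC socket.
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # rpc_get_methods succeeds once the app is up and listening on the socket;
    # the 100 x 0.1s retry budget is an assumption for illustration.
    for _ in $(seq 1 100); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

Once the socket answers, the trace proceeds to timing_exit start_nvmf_tgt and the transport/subsystem setup that follows.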
00:06:39.002 [2024-11-20 09:40:09.223237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.002 [2024-11-20 09:40:09.223405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.002 [2024-11-20 09:40:09.223405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.002 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.002 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:39.002 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:39.002 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.002 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:39.262 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:39.262 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:39.262 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:39.262 [2024-11-20 09:40:10.115306] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.262 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:39.556 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:39.877 [2024-11-20 09:40:10.510363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:39.877 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:39.877 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:40.155 Malloc0 00:06:40.155 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:40.416 Delay0 00:06:40.416 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.677 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:40.677 NULL1 00:06:40.677 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:40.937 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:40.937 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1159238 00:06:40.937 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:40.937 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.197 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.197 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:41.197 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:41.457 true 00:06:41.457 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:41.458 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.718 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.978 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:41.978 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:41.978 true 00:06:41.978 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:41.978 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.360 Read completed with error (sct=0, sc=11) 00:06:43.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.360 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
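From here on, every iteration follows the same shape visible in the ns_hotplug_stress.sh line numbers (@44-@50): while the spdk_nvme_perf reader is alive, namespace 1 is hot-removed and re-added and NULL1 is grown by one block, so null_size ticks 1001, 1002, ... as logged. The repeated "Read completed with error (sct=0, sc=11)" messages from perf are expected: 0x0B is the NVMe generic status Invalid Namespace or Format, which reads hit during the window where the namespace is detached. A sketch of the loop, reconstructed from those script line numbers (the real script's contents may differ in detail; PERF_PID is the spdk_nvme_perf pid from the trace, 1159238):

    # Reconstruction, not the verbatim script: stress hot-remove/hot-add + resize
    # against cnode1 for as long as the perf workload keeps running.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do                       # @44
        "$RPC" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45
        "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46
        null_size=$((null_size + 1))                                # @49
        "$RPC" bdev_null_resize NULL1 "$null_size"                  # @50, prints "true"
    done

The "true" lines interleaved below are the bdev_null_resize RPC responses.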
00:06:43.360 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:43.360 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:43.620 true 00:06:43.620 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:43.620 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.561 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.561 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:44.561 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:44.820 true 00:06:44.820 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:44.820 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.820 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.079 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:45.079 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:45.339 true 00:06:45.339 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:45.339 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.277 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.537 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:46.537 09:40:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:46.796 true 00:06:46.796 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:46.796 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.733 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.733 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:47.733 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:47.992 true 00:06:47.992 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:47.992 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.251 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.251 09:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:48.251 09:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:48.511 true 00:06:48.511 09:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:48.511 09:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.711 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.711 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:49.711 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:49.970 true 00:06:49.970 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:49.970 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.908 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.908 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:50.908 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:51.168 true 00:06:51.168 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:51.168 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.427 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.427 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:51.427 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:51.686 true 00:06:51.686 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:51.686 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.068 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.068 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:53.068 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:53.329 true 00:06:53.329 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:53.329 
09:40:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.270 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.270 09:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:54.270 09:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:54.531 true 00:06:54.531 09:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:54.531 09:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.531 09:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.791 09:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:54.791 09:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:55.052 true 00:06:55.052 09:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:55.052 09:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.052 09:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.312 09:40:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:55.312 09:40:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:55.572 true 00:06:55.572 09:40:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:55.572 09:40:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.832 09:40:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.833 09:40:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:55.833 
09:40:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:56.093 true 00:06:56.093 09:40:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:56.093 09:40:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.478 09:40:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.478 09:40:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:57.478 09:40:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:57.478 true 00:06:57.478 09:40:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:57.478 09:40:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.419 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.678 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:58.678 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:58.678 true 00:06:58.678 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:58.678 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.938 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.199 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:59.199 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1019 00:06:59.199 true 00:06:59.199 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:06:59.199 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.584 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.584 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:00.584 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:00.844 true 00:07:00.844 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:07:00.844 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.784 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.784 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:01.784 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:02.044 true 00:07:02.044 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:07:02.044 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.303 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.303 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:02.303 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:02.589 true 00:07:02.589 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
1159238 00:07:02.589 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.970 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.970 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:03.970 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:03.970 true 00:07:03.970 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:07:03.970 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.911 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.171 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:05.171 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:05.171 true 00:07:05.171 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:07:05.171 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.431 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.690 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:05.690 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:05.690 true 00:07:05.950 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238 00:07:05.950 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
00:07:05.950 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:05.950 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:06.208 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:07:06.208 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:07:06.208 true
00:07:06.468 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238
00:07:06.468 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:06.468 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:06.729 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:07:06.729 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:07:06.989 true
00:07:06.989 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238
00:07:06.989 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:06.989 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:07.249 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:07:07.249 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:07:07.508 true
00:07:07.508 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238
00:07:07.508 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:07.508 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:07.768 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:07:07.768 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:07:08.027 true
00:07:08.027 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238
00:07:08.027 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:08.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:08.289 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:08.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:08.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:08.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:08.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:08.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:08.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:08.289 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:07:08.289 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:07:08.548 true
00:07:08.549 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238
00:07:08.549 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:09.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:09.491 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:09.491 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:07:09.491 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:07:09.751 true
00:07:09.751 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238
00:07:09.751 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:10.012 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:10.012 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:07:10.012 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:07:10.272 true
00:07:10.272 09:40:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238
00:07:10.272 09:40:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:11.768 Initializing NVMe Controllers
00:07:11.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:11.768 Controller IO queue size 128, less than required.
00:07:11.768 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:11.768 Controller IO queue size 128, less than required.
00:07:11.768 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:11.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:11.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:11.768 Initialization complete. Launching workers.
00:07:11.768 ========================================================
00:07:11.768                                                                                Latency(us)
00:07:11.768 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:11.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2067.73       1.01   37347.88    1304.74 1082802.68
00:07:11.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   17943.98       8.76    7109.79    1258.09  400589.48
00:07:11.768 ========================================================
00:07:11.768 Total                                                                    :   20011.71       9.77   10234.17    1258.09 1082802.68
00:07:11.768
00:07:11.768 09:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:11.768 09:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:07:11.768 09:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:07:11.768 true
00:07:11.768 09:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1159238
00:07:11.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1159238) - No such process
00:07:11.768 09:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1159238
00:07:11.768 09:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:12.029 09:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:12.290 09:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:12.290 09:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:12.290 09:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:12.290 09:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
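For orientation, the sh@44 through sh@55 markers above trace the single-namespace hotplug loop of ns_hotplug_stress.sh, run against live I/O from a perf process (PID 1159238 in this run). A minimal sketch of that loop, reconstructed from the xtrace; the rpc_py and perf_pid variable names are assumptions, while the RPC invocations and their arguments are verbatim from the log:

# Sketch reconstructed from the sh@44-sh@55 xtrace above; not the verbatim script.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# perf_pid: PID of the perf I/O generator started earlier in the script
# (1159238 here); null_size starts near 1000 and grows by one per pass.
while kill -0 "$perf_pid"; do                                         # sh@44: loop while perf is alive
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # sh@45: hot-remove NSID 1
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # sh@46: re-attach the Delay0 bdev
    null_size=$((null_size + 1))                                      # sh@49
    "$rpc_py" bdev_null_resize NULL1 "$null_size"                     # sh@50: resize NULL1 under live I/O
done
wait "$perf_pid"                                                      # sh@53: reap perf once kill -0 fails
"$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # sh@54: final teardown of both
"$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2       # sh@55: namespaces

The "Initializing NVMe Controllers ... Latency(us)" block above is perf's exit summary for the two namespaces it exercised; NSID 1's much higher average latency (about 37 ms versus 7 ms) is consistent with it being the namespace that was continuously detached, re-attached, and resized. The subsequent "kill: (1159238) - No such process" is the loop's expected exit condition, not a failure.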
00:07:12.290 09:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:12.290 null0
00:07:12.290 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:12.290 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:12.290 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:12.550 null1
00:07:12.550 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:12.550 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:12.550 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:12.811 null2
00:07:12.811 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:12.811 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:12.811 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:07:12.811 null3
00:07:12.811 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:12.811 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:12.811 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:07:13.073 null4
00:07:13.073 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:13.073 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:13.073 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:07:13.334 null5
00:07:13.334 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:13.334 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:13.334 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:07:13.334 null6
00:07:13.334 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:13.334 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:13.334 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:07:13.595 null7
00:07:13.595 09:40:44
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:13.595 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1165969 1165971 1165974 1165977 1165980 1165983 1165986 1165989 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.596 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.858 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.858 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.858 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.858 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.858 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.858 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.858 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.858 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.120 09:40:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.120 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.120 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.120 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.120 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.120 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.382 09:40:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.382 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.644 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.644 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.644 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.644 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.644 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.644 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.644 09:40:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.644 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.644 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.644 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.644 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.644 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.644 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.644 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.644 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.644 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.644 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.906 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:15.168 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.168 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:15.168 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.168 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.168 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:15.168 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.168 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.168 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:15.168 09:40:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.168 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.168 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:15.168 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.168 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.168 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:15.168 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.168 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.168 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:15.168 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.168 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.168 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:15.168 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.168 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.169 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:15.169 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.169 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.169 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:15.169 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:15.431 09:40:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.431 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:15.694 09:40:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.694 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.694 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:15.694 09:40:46
[... repeated loop iterations elided: ns_hotplug_stress.sh@16-18 loops i from 0 to 9, each pass re-adding namespaces 1-8 of nqn.2016-06.io.spdk:cnode1 (nsid n backed by bdev null(n-1)) via nvmf_subsystem_add_ns and removing them again in shuffled order via nvmf_subsystem_remove_ns, 00:07:15.694 through 00:07:17.623 ...]
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:17.623 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:17.623 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:17.623 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:17.623
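For reference, the loop traced above boils down to repeatedly attaching and detaching namespaces over RPC while traffic runs. A minimal sketch of that pattern, assuming an SPDK target already answers on the default RPC socket and bdevs null0-null7 exist; the iteration count and the shuffled ordering here are illustrative, not the script's exact logic:

#!/usr/bin/env bash
# Hot-plug stress sketch: re-add namespaces 1-8, then remove them in shuffled order.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
for (( i = 0; i < 10; ++i )); do
    for n in $(seq 1 8 | shuf); do
        # nsid n is backed by the matching null bdev: null0 backs nsid 1, and so on
        "$RPC" nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
    done
    for n in $(seq 1 8 | shuf); do
        "$RPC" nvmf_subsystem_remove_ns "$NQN" "$n"
    done
done

Run against a live target, each pass exercises the namespace attach/detach paths exactly the way the trace records them, with connected hosts seeing namespaces appear and disappear between iterations.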
09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:17.623 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:17.623 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:17.623 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:17.623 rmmod nvme_tcp 00:07:17.623 rmmod nvme_fabrics 00:07:17.623 rmmod nvme_keyring 00:07:17.623 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:17.623 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:17.623 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:17.623 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1158822 ']' 00:07:17.623 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1158822 00:07:17.623 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1158822 ']' 00:07:17.623 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1158822 00:07:17.623 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:17.623 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.623 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1158822 00:07:17.623 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:17.624 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:17.624 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1158822' 00:07:17.624 killing process with pid 1158822 00:07:17.624 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1158822 00:07:17.624 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1158822 00:07:17.886 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:17.886 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:17.886 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:17.886 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:17.886 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:17.886 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:17.886 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:17.886 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:17.886 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:17.886 09:40:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.886 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:17.886 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.804 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:19.804 00:07:19.804 real 0m49.395s 00:07:19.804 user 3m13.931s 00:07:19.804 sys 0m16.165s 00:07:19.804 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.804 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:19.804 ************************************ 00:07:19.804 END TEST nvmf_ns_hotplug_stress 00:07:19.804 ************************************ 00:07:19.804 09:40:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:19.804 09:40:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:19.804 09:40:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.804 09:40:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:20.066 ************************************ 00:07:20.066 START TEST nvmf_delete_subsystem 00:07:20.066 ************************************ 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:20.066 * Looking for test storage... 
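The run_test helper invoked above is what produces the START TEST / END TEST asterisk banners and the '[' 3 -le 1 ']' argument-count check visible in the trace. A rough sketch of that wrapper's shape, under the assumption that it banners, times, and executes the test command it is handed (the real helper in autotest_common.sh does considerably more bookkeeping):

run_test() {
    # Hypothetical simplification of the autotest run_test wrapper.
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    local start=$SECONDS
    "$@"                 # e.g. .../test/nvmf/target/delete_subsystem.sh --transport=tcp
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    echo "Elapsed: $((SECONDS - start))s"
    return $rc
}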
00:07:20.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.066 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:20.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.067 --rc genhtml_branch_coverage=1 00:07:20.067 --rc genhtml_function_coverage=1 00:07:20.067 --rc genhtml_legend=1 00:07:20.067 --rc geninfo_all_blocks=1 00:07:20.067 --rc geninfo_unexecuted_blocks=1 00:07:20.067 00:07:20.067 ' 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:20.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.067 --rc genhtml_branch_coverage=1 00:07:20.067 --rc genhtml_function_coverage=1 00:07:20.067 --rc genhtml_legend=1 00:07:20.067 --rc geninfo_all_blocks=1 00:07:20.067 --rc geninfo_unexecuted_blocks=1 00:07:20.067 00:07:20.067 ' 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:20.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.067 --rc genhtml_branch_coverage=1 00:07:20.067 --rc genhtml_function_coverage=1 00:07:20.067 --rc genhtml_legend=1 00:07:20.067 --rc geninfo_all_blocks=1 00:07:20.067 --rc geninfo_unexecuted_blocks=1 00:07:20.067 00:07:20.067 ' 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:20.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.067 --rc genhtml_branch_coverage=1 00:07:20.067 --rc genhtml_function_coverage=1 00:07:20.067 --rc genhtml_legend=1 00:07:20.067 --rc geninfo_all_blocks=1 00:07:20.067 --rc geninfo_unexecuted_blocks=1 00:07:20.067 00:07:20.067 ' 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.067 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.329 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:20.329 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.329 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.329 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.329 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.329 09:40:50
[... paths/export.sh@3-6 elided: each re-sourcing prepends the same golangci/protoc/go directories again, so the PATH values traced and finally echoed repeat that prefix several times over; the de-duplicated value is shown above ...]
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:20.329 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:20.329 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:20.329 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:20.329 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.329 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.329 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.329 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:20.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:20.329 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:20.329 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:20.329 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:20.329 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:20.329 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:20.329 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.330 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:20.330 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:20.330 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:20.330 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.330 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.330 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.330 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:20.330 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:20.330 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:20.330 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:28.471 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:28.471 
09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:28.471 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:28.471 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:28.471 Found net devices under 0000:4b:00.1: cvl_0_1 
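The device discovery traced above walks each whitelisted NIC PCI address and resolves it to a kernel net device through sysfs, which is how the two E810 ports (0x8086:0x159b) end up as cvl_0_0 and cvl_0_1. A condensed sketch of that sysfs walk, using the array manipulations visible in the trace (PCI addresses are taken from this run; the up-state check and empty-glob handling are omitted):

#!/usr/bin/env bash
# For each NIC PCI address, list the net interfaces the kernel bound to it.
pci_devs=("0000:4b:00.0" "0000:4b:00.1")
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done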
00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.471 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:28.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:07:28.472 00:07:28.472 --- 10.0.0.2 ping statistics --- 00:07:28.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.472 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:28.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.368 ms 00:07:28.472 00:07:28.472 --- 10.0.0.1 ping statistics --- 00:07:28.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.472 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1171229 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1171229 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1171229 ']' 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.472 09:40:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.472 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.472 [2024-11-20 09:40:58.512264] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:07:28.472 [2024-11-20 09:40:58.512332] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.472 [2024-11-20 09:40:58.610251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:28.472 [2024-11-20 09:40:58.661865] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.472 [2024-11-20 09:40:58.661917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.472 [2024-11-20 09:40:58.661925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.472 [2024-11-20 09:40:58.661933] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.472 [2024-11-20 09:40:58.661939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:28.472 [2024-11-20 09:40:58.663659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.472 [2024-11-20 09:40:58.663664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.472 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.472 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:28.472 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:28.472 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:28.472 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.472 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.733 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:28.733 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.733 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.733 [2024-11-20 09:40:59.389855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.733 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.733 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:28.733 09:40:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.733 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.733 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.733 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.733 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.733 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.733 [2024-11-20 09:40:59.414125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.733 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.733 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:28.733 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.734 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.734 NULL1 00:07:28.734 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.734 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:28.734 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.734 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.734 Delay0 00:07:28.734 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.734 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.734 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.734 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.734 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.734 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1171284 00:07:28.734 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:28.734 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:28.734 [2024-11-20 09:40:59.541133] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
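The traces above assemble the whole delete-under-load fixture: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, a 1000 MiB null bdev wrapped in a delay bdev (Delay0, 1000000 us on every latency knob, so roughly one second per I/O), and an spdk_nvme_perf run that is deliberately still in flight when the subsystem gets deleted two seconds later. A minimal standalone sketch of the same sequence, assuming an already running nvmf_tgt reachable on the default RPC socket and $SPDK_DIR pointing at an SPDK checkout; the RPC and perf arguments are copied verbatim from this log, the wrapper script itself is illustrative:

    #!/usr/bin/env bash
    # Sketch of the delete_subsystem.sh flow traced above: build the target,
    # start perf, then delete the subsystem while I/O is still queued.
    set -euo pipefail
    rpc="$SPDK_DIR/scripts/rpc.py"            # assumed SPDK checkout layout
    perf="$SPDK_DIR/build/bin/spdk_nvme_perf"

    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512    # 1000 MiB backing bdev, 512 B blocks
    "$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Start I/O, then rip the subsystem out from under it.
    "$perf" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
            -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Poll the way delete_subsystem.sh lines 34-38 do in the trace: perf is
    # expected to exit on its own once its outstanding I/O is aborted.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        if (( delay++ > 30 )); then
            echo 'spdk_nvme_perf did not exit' >&2
            exit 1
        fi
        sleep 0.5
    done

With the one-second delay bdev holding a 128-deep queue, deleting the subsystem is what produces the wall of aborted completions ("Read/Write completed with error (sct=0, sc=8)") that follows.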
00:07:30.648 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:30.648 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.648 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:30.909 Read completed with error (sct=0, sc=8)
00:07:30.909 Write completed with error (sct=0, sc=8)
00:07:30.909 Read completed with error (sct=0, sc=8)
00:07:30.909 starting I/O failed: -6
[repeated interleaved "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" host completions trimmed here and between the qpair errors below; they continue until spdk_nvme_perf gives up]
00:07:30.910 [2024-11-20 09:41:01.707951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173d680 is same with the state(6) to be set
00:07:30.910 [2024-11-20 09:41:01.711462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f51c800d020 is same with the state(6) to be set
00:07:31.854 [2024-11-20 09:41:02.681645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173e9a0 is same with the state(6) to be set
00:07:31.854 [2024-11-20 09:41:02.712105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173d4a0 is same with the state(6) to be set
00:07:31.854 [2024-11-20 09:41:02.712254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173d860 is same with the state(6) to be set
00:07:31.854 [2024-11-20 09:41:02.713405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f51c800d350 is same with the state(6) to be set
00:07:31.854 [2024-11-20 09:41:02.713808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f51c8000c40 is same with the state(6) to be set
00:07:31.854 Initializing NVMe Controllers
00:07:31.854 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:31.854 Controller IO queue size 128, less than required.
00:07:31.854 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:31.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:31.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:31.854 Initialization complete. Launching workers.
00:07:31.854 ========================================================
00:07:31.854 Latency(us)
00:07:31.854 Device Information : IOPS MiB/s Average min max
00:07:31.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.16 0.09 880033.50 362.19 1007618.17
00:07:31.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.23 0.08 927990.02 311.89 2001769.33
00:07:31.854 ========================================================
00:07:31.854 Total : 339.40 0.17 902957.00 311.89 2001769.33
00:07:31.854
00:07:31.854 [2024-11-20 09:41:02.714406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173e9a0 (9): Bad file descriptor
00:07:31.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:31.854 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:31.854 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:31.854 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1171284
00:07:31.854 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:32.425 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:32.425 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1171284
00:07:32.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1171284) - No such process
00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1171284
00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1171284
09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem --
common/autotest_common.sh@640 -- # local arg=wait 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1171284 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.426 [2024-11-20 09:41:03.243939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1172123 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1172123 00:07:32.426 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:32.686 [2024-11-20 09:41:03.342339] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:32.946 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:32.946 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1172123 00:07:32.946 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:33.517 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:33.517 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1172123 00:07:33.517 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:34.088 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:34.088 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1172123 00:07:34.088 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:34.660 09:41:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:34.660 09:41:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1172123 00:07:34.660 09:41:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:34.920 09:41:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:34.920 09:41:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1172123 00:07:34.920 09:41:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:35.498 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:35.498 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1172123 00:07:35.498 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:35.761 Initializing NVMe Controllers 00:07:35.761 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:35.761 Controller IO queue size 128, less than required. 00:07:35.761 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:35.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:35.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:35.761 Initialization complete. Launching workers. 
00:07:35.761 ========================================================
00:07:35.761 Latency(us)
00:07:35.761 Device Information : IOPS MiB/s Average min max
00:07:35.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002516.15 1000165.32 1041938.22
00:07:35.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002832.17 1000236.55 1007975.15
00:07:35.761 ========================================================
00:07:35.761 Total : 256.00 0.12 1002674.16 1000165.32 1041938.22
00:07:35.761
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1172123
00:07:36.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1172123) - No such process
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1172123
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:36.022 rmmod nvme_tcp
00:07:36.022 rmmod nvme_fabrics
00:07:36.022 rmmod nvme_keyring
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1171229 ']'
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1171229
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1171229 ']'
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1171229
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:36.022 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1171229
00:07:36.283 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:36.283 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '['
reactor_0 = sudo ']' 00:07:36.283 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1171229' 00:07:36.283 killing process with pid 1171229 00:07:36.283 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1171229 00:07:36.283 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1171229 00:07:36.283 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:36.283 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:36.283 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:36.283 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:36.283 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:36.283 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:36.283 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:36.283 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:36.283 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:36.283 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.283 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:36.283 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:38.831 00:07:38.831 real 0m18.366s 00:07:38.831 user 0m31.046s 00:07:38.831 sys 0m6.692s 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.831 ************************************ 00:07:38.831 END TEST nvmf_delete_subsystem 00:07:38.831 ************************************ 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:38.831 ************************************ 00:07:38.831 START TEST nvmf_host_management 00:07:38.831 ************************************ 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:38.831 * Looking for test storage... 
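That closes out nvmf_delete_subsystem: both perf runs exit on their own (the "No such process" from kill -0 is the expected outcome, confirmed by NOT wait), the nvme-tcp/nvme-fabrics initiator modules are unloaded, the target process is killed, and iptr restores the firewall. The cleanup leans on a convention visible at the top of this test: every rule the harness inserts carries an SPDK_NVMF comment, so teardown never has to track rule positions. A sketch of that pattern, with the interface and port taken from this run and the script framing added:

    #!/usr/bin/env bash
    # Tag test-scoped firewall rules so they can be bulk-removed later,
    # mirroring the iptables insert and the iptr helper seen in this log.
    set -euo pipefail

    rule=(-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT)
    # "${rule[*]}" joins the rule words into the same "SPDK_NVMF:..." comment
    # string that appears in the trace above.
    iptables -I INPUT 1 "${rule[@]}" -m comment --comment "SPDK_NVMF:${rule[*]}"

    # ... test traffic runs here ...

    # iptr: round-trip the ruleset, dropping every line tagged SPDK_NVMF.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The nvmf_host_management run that starts here begins, as each target test does, by locating its test storage.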
00:07:38.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:38.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.831 --rc genhtml_branch_coverage=1 00:07:38.831 --rc genhtml_function_coverage=1 00:07:38.831 --rc genhtml_legend=1 00:07:38.831 --rc geninfo_all_blocks=1 00:07:38.831 --rc geninfo_unexecuted_blocks=1 00:07:38.831 00:07:38.831 ' 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:38.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.831 --rc genhtml_branch_coverage=1 00:07:38.831 --rc genhtml_function_coverage=1 00:07:38.831 --rc genhtml_legend=1 00:07:38.831 --rc geninfo_all_blocks=1 00:07:38.831 --rc geninfo_unexecuted_blocks=1 00:07:38.831 00:07:38.831 ' 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:38.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.831 --rc genhtml_branch_coverage=1 00:07:38.831 --rc genhtml_function_coverage=1 00:07:38.831 --rc genhtml_legend=1 00:07:38.831 --rc geninfo_all_blocks=1 00:07:38.831 --rc geninfo_unexecuted_blocks=1 00:07:38.831 00:07:38.831 ' 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:38.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.831 --rc genhtml_branch_coverage=1 00:07:38.831 --rc genhtml_function_coverage=1 00:07:38.831 --rc genhtml_legend=1 00:07:38.831 --rc geninfo_all_blocks=1 00:07:38.831 --rc geninfo_unexecuted_blocks=1 00:07:38.831 00:07:38.831 ' 00:07:38.831 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:38.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:38.832 09:41:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:46.979 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:46.979 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:46.979 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.979 09:41:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:46.979 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.979 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:46.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:46.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:07:46.980 00:07:46.980 --- 10.0.0.2 ping statistics --- 00:07:46.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.980 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:46.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:46.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:07:46.980 00:07:46.980 --- 10.0.0.1 ping statistics --- 00:07:46.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.980 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1177084 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1177084 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:46.980 09:41:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1177084 ']' 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.980 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.980 [2024-11-20 09:41:17.024460] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:07:46.980 [2024-11-20 09:41:17.024525] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.980 [2024-11-20 09:41:17.124135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.980 [2024-11-20 09:41:17.178792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.980 [2024-11-20 09:41:17.178844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.980 [2024-11-20 09:41:17.178853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.980 [2024-11-20 09:41:17.178860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.980 [2024-11-20 09:41:17.178866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
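Note: the nvmftestinit trace above is effectively a self-contained recipe for single-box NVMe/TCP testing: one port of the NIC pair is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator, and nvmf_tgt runs under ip netns exec so the traffic really crosses the wire. A minimal sketch of the equivalent setup, assuming the two ice ports are named cvl_0_0/cvl_0_1 as in this run and that SPDK_DIR points at a built SPDK tree:

#!/usr/bin/env bash
# Two-namespace NVMe/TCP topology, mirroring the nvmftestinit trace above.
set -euo pipefail
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"               # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port; the comment lets teardown find and drop the rule.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                            # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1        # target namespace -> initiator
# Start the target inside the namespace on cores 1-4 (-m 0x1E), as above.
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &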
00:07:46.980 [2024-11-20 09:41:17.180937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.980 [2024-11-20 09:41:17.181100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.980 [2024-11-20 09:41:17.181266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:46.980 [2024-11-20 09:41:17.181425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.980 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.980 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:46.980 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:46.980 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:46.980 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.243 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.243 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:47.243 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.243 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.243 [2024-11-20 09:41:17.905508] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.243 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.243 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:47.243 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.243 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.243 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:47.243 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:47.243 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:47.243 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.243 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.243 Malloc0 00:07:47.243 [2024-11-20 09:41:17.994292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.243 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.243 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:47.243 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:47.243 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.243 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1177338 00:07:47.243 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1177338 /var/tmp/bdevperf.sock 00:07:47.243 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1177338 ']' 00:07:47.243 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:47.243 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.243 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:47.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:47.243 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:47.243 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.243 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:47.243 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.243 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:47.243 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:47.243 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:47.243 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:47.243 { 00:07:47.243 "params": { 00:07:47.243 "name": "Nvme$subsystem", 00:07:47.243 "trtype": "$TEST_TRANSPORT", 00:07:47.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:47.243 "adrfam": "ipv4", 00:07:47.243 "trsvcid": "$NVMF_PORT", 00:07:47.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:47.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:47.243 "hdgst": ${hdgst:-false}, 00:07:47.243 "ddgst": ${ddgst:-false} 00:07:47.243 }, 00:07:47.243 "method": "bdev_nvme_attach_controller" 00:07:47.243 } 00:07:47.243 EOF 00:07:47.243 )") 00:07:47.243 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:47.244 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:47.244 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:47.244 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:47.244 "params": { 00:07:47.244 "name": "Nvme0", 00:07:47.244 "trtype": "tcp", 00:07:47.244 "traddr": "10.0.0.2", 00:07:47.244 "adrfam": "ipv4", 00:07:47.244 "trsvcid": "4420", 00:07:47.244 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:47.244 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:47.244 "hdgst": false, 00:07:47.244 "ddgst": false 00:07:47.244 }, 00:07:47.244 "method": "bdev_nvme_attach_controller" 00:07:47.244 }' 00:07:47.244 [2024-11-20 09:41:18.105389] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
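Note: gen_nvmf_target_json, expanded just above, never touches disk: each subsystem's bdev_nvme_attach_controller stanza comes out of a heredoc, the stanzas are comma-joined through IFS, pretty-printed with jq, and fed to bdevperf over a process substitution (hence --json /dev/fd/63). A condensed sketch of the pattern; the outer "subsystems"/"bdev" wrapper is an assumption, since only the inner stanza is visible in the trace:

# Hypothetical, condensed version of the helper traced above.
gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    jq . <<<"{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${config[*]}]}]}"
}

# bdevperf then consumes the generated config over an anonymous fd:
"$SPDK_DIR/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_target_json 0) -q 64 -o 65536 -w verify -t 10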
00:07:47.244 [2024-11-20 09:41:18.105457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1177338 ] 00:07:47.505 [2024-11-20 09:41:18.198631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.506 [2024-11-20 09:41:18.252024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.767 Running I/O for 10 seconds... 00:07:48.028 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.028 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:48.028 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:48.028 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.028 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=767 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 767 -ge 100 ']' 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:48.291 09:41:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.291 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.291 [2024-11-20 09:41:18.998205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2348130 is same with the state(6) to be set 00:07:48.291 [2024-11-20 09:41:18.998708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.291 [2024-11-20 09:41:18.998771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.998794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.998803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.998813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.998822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.998832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.998839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.998849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.998857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.998867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.998876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.998887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.998895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.998906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.998914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 
09:41:18.998923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.998932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.998944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.998961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.998971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.998978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.998989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.998997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 
09:41:18.999125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 
09:41:18.999308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 
09:41:18.999488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.292 [2024-11-20 09:41:18.999505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.292 [2024-11-20 09:41:18.999513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 
09:41:18.999665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 
09:41:18.999841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.293 [2024-11-20 09:41:18.999939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:18.999975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:07:48.293 [2024-11-20 09:41:19.001262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:48.293 task offset: 107136 on job bdev=Nvme0n1 fails
00:07:48.293
00:07:48.293 Latency(us)
00:07:48.293 [2024-11-20T08:41:19.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:48.293 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:48.293 Job: Nvme0n1 ended in about 0.58 seconds with error
00:07:48.293 Verification LBA range: start 0x0 length 0x400
00:07:48.293 Nvme0n1 : 0.58 1446.62 90.41 110.61 0.00 40099.23 1802.24 36481.71
00:07:48.293 [2024-11-20T08:41:19.209Z] ===================================================================================================================
00:07:48.293 [2024-11-20T08:41:19.209Z] Total : 1446.62 90.41 110.61 0.00 40099.23 1802.24 36481.71
00:07:48.293 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.293 [2024-11-20 09:41:19.003511] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.293 [2024-11-20 09:41:19.003550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush
tqpair=0x1f87000 (9): Bad file descriptor 00:07:48.293 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:48.293 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.293 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.293 [2024-11-20 09:41:19.007637] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:48.293 [2024-11-20 09:41:19.007761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:48.293 [2024-11-20 09:41:19.007805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.293 [2024-11-20 09:41:19.007824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:48.293 [2024-11-20 09:41:19.007842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:48.293 [2024-11-20 09:41:19.007851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:48.293 [2024-11-20 09:41:19.007859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f87000 00:07:48.293 [2024-11-20 09:41:19.007882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f87000 (9): Bad file descriptor 00:07:48.293 [2024-11-20 09:41:19.007897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:07:48.293 [2024-11-20 09:41:19.007904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:07:48.293 [2024-11-20 09:41:19.007916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:07:48.293 [2024-11-20 09:41:19.007928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
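Note: this block is the heart of the host-management test. With bdevperf still driving verify I/O, the host is removed from the subsystem's allowed-host list; the target deletes the queue pairs (the burst of ABORTED - SQ DELETION completions and the CQ transport error -6 above), and the initiator's automatic controller reset is then refused with "does not allow host". Re-adding the host lets the reconnect succeed. The same fault injection against a live target, using scripts/rpc.py rather than the script's rpc_cmd wrapper (SPDK_DIR is an assumed path variable):

# Deny the host mid-I/O: its qpairs are torn down and reconnects rejected.
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_remove_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# ...initiator-side resets now fail with "does not allow host"...
# Re-admit the host so the next reconnect attempt can succeed.
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1    # as in the script below: give the initiator a moment to recover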
00:07:48.293 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.293 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:49.237 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1177338 00:07:49.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1177338) - No such process 00:07:49.237 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:49.237 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:49.237 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:49.237 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:49.237 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:49.237 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:49.237 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:49.237 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:49.237 { 00:07:49.237 "params": { 00:07:49.237 "name": "Nvme$subsystem", 00:07:49.237 "trtype": "$TEST_TRANSPORT", 00:07:49.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:49.237 "adrfam": "ipv4", 00:07:49.237 "trsvcid": "$NVMF_PORT", 00:07:49.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:49.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:49.237 "hdgst": ${hdgst:-false}, 00:07:49.237 "ddgst": ${ddgst:-false} 00:07:49.237 }, 00:07:49.237 "method": "bdev_nvme_attach_controller" 00:07:49.237 } 00:07:49.237 EOF 00:07:49.237 )") 00:07:49.237 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:49.237 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:49.237 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:49.237 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:49.237 "params": { 00:07:49.237 "name": "Nvme0", 00:07:49.237 "trtype": "tcp", 00:07:49.237 "traddr": "10.0.0.2", 00:07:49.237 "adrfam": "ipv4", 00:07:49.237 "trsvcid": "4420", 00:07:49.237 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:49.237 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:49.237 "hdgst": false, 00:07:49.237 "ddgst": false 00:07:49.237 }, 00:07:49.237 "method": "bdev_nvme_attach_controller" 00:07:49.237 }' 00:07:49.237 [2024-11-20 09:41:20.080881] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:49.237 [2024-11-20 09:41:20.080946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1177691 ] 00:07:49.498 [2024-11-20 09:41:20.170576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.498 [2024-11-20 09:41:20.204860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.498 Running I/O for 1 seconds... 00:07:50.882 1600.00 IOPS, 100.00 MiB/s 00:07:50.882 Latency(us) 00:07:50.882 [2024-11-20T08:41:21.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.882 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:50.882 Verification LBA range: start 0x0 length 0x400 00:07:50.882 Nvme0n1 : 1.04 1607.24 100.45 0.00 0.00 39129.32 6580.91 32986.45 00:07:50.882 [2024-11-20T08:41:21.798Z] =================================================================================================================== 00:07:50.882 [2024-11-20T08:41:21.798Z] Total : 1607.24 100.45 0.00 0.00 39129.32 6580.91 32986.45 00:07:50.882 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:50.882 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:50.882 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:50.882 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:50.882 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:50.882 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:50.882 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:50.882 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:50.882 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:50.882 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:50.882 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:50.882 rmmod nvme_tcp 00:07:50.882 rmmod nvme_fabrics 00:07:50.882 rmmod nvme_keyring 00:07:50.882 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:50.882 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:50.882 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:50.882 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1177084 ']' 00:07:50.883 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1177084 00:07:50.883 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1177084 ']' 00:07:50.883 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1177084 00:07:50.883 09:41:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:50.883 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.883 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1177084 00:07:50.883 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:50.883 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:50.883 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1177084' 00:07:50.883 killing process with pid 1177084 00:07:50.883 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1177084 00:07:50.883 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1177084 00:07:50.883 [2024-11-20 09:41:21.760170] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:50.883 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:50.883 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:50.883 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:50.883 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:50.883 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:50.883 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:50.883 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:51.144 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:51.144 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:51.144 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.144 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.144 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.060 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:53.060 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:53.060 00:07:53.060 real 0m14.677s 00:07:53.060 user 0m23.016s 00:07:53.060 sys 0m6.832s 00:07:53.060 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.060 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.060 ************************************ 00:07:53.060 END TEST nvmf_host_management 00:07:53.060 ************************************ 00:07:53.060 09:41:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
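For reference, the nvmftestfini teardown traced above (run before nvmf_lvol starts) reduces to a short command sequence. A rough standalone sketch of what the trace shows; the PID and interface names are specific to this run, and the namespace removal is an assumed stand-in for _remove_spdk_ns, whose body is hidden behind xtrace_disable:

    sync                                                  # flush outstanding I/O before teardown
    modprobe -v -r nvme-tcp                               # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 1177084                                          # killprocess: nvmfpid recorded at target startup
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip only the SPDK-tagged firewall rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null           # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
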
00:07:53.060 09:41:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:53.060 09:41:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.060 09:41:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:53.060 ************************************ 00:07:53.060 START TEST nvmf_lvol 00:07:53.060 ************************************ 00:07:53.060 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:53.321 * Looking for test storage... 00:07:53.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:53.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.321 --rc genhtml_branch_coverage=1 00:07:53.321 --rc genhtml_function_coverage=1 00:07:53.321 --rc genhtml_legend=1 00:07:53.321 --rc geninfo_all_blocks=1 00:07:53.321 --rc geninfo_unexecuted_blocks=1 00:07:53.321 00:07:53.321 ' 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:53.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.321 --rc genhtml_branch_coverage=1 00:07:53.321 --rc genhtml_function_coverage=1 00:07:53.321 --rc genhtml_legend=1 00:07:53.321 --rc geninfo_all_blocks=1 00:07:53.321 --rc geninfo_unexecuted_blocks=1 00:07:53.321 00:07:53.321 ' 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:53.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.321 --rc genhtml_branch_coverage=1 00:07:53.321 --rc genhtml_function_coverage=1 00:07:53.321 --rc genhtml_legend=1 00:07:53.321 --rc geninfo_all_blocks=1 00:07:53.321 --rc geninfo_unexecuted_blocks=1 00:07:53.321 00:07:53.321 ' 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:53.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.321 --rc genhtml_branch_coverage=1 00:07:53.321 --rc genhtml_function_coverage=1 00:07:53.321 --rc genhtml_legend=1 00:07:53.321 --rc geninfo_all_blocks=1 00:07:53.321 --rc geninfo_unexecuted_blocks=1 00:07:53.321 00:07:53.321 ' 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
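The lt/cmp_versions trace above is the framework's dotted-version comparison, used here to decide whether the installed lcov predates 2.x: both version strings are split on '.', '-' and ':' and their fields are compared left to right as integers. A minimal standalone sketch of the same idea (condensed; numeric fields assumed):

    lt() {
        # split both versions on . - : and compare numeric fields left to right
        local IFS=.-:
        local -a ver1=($1) ver2=($2)
        local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((i = 0; i < max; i++)); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # missing fields count as 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x -> use the pre-2.0 LCOV_OPTS flags"
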
00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.321 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:53.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:53.322 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:01.498 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:01.498 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.498 09:41:31 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:01.498 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:01.498 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.498 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:01.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:01.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:08:01.499 00:08:01.499 --- 10.0.0.2 ping statistics --- 00:08:01.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.499 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:01.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:08:01.499 00:08:01.499 --- 10.0.0.1 ping statistics --- 00:08:01.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.499 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1182375 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1182375 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1182375 ']' 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.499 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:01.499 [2024-11-20 09:41:31.773784] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
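The nvmf_tcp_init sequence traced above wires the two e810 ports into the standard loopback test topology: cvl_0_0 is moved into a private network namespace and serves as the target at 10.0.0.2, while its peer cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and a one-packet ping in each direction verifies the path. Condensed into plain commands, all taken from the trace (the ipts helper only adds an SPDK_NVMF-tagged comment so teardown can strip the rule again):

    ip netns add cvl_0_0_ns_spdk                       # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
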
00:08:01.499 [2024-11-20 09:41:31.773854] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.499 [2024-11-20 09:41:31.870701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.499 [2024-11-20 09:41:31.923703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.499 [2024-11-20 09:41:31.923751] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.499 [2024-11-20 09:41:31.923760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.499 [2024-11-20 09:41:31.923768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.499 [2024-11-20 09:41:31.923774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:01.499 [2024-11-20 09:41:31.925630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.499 [2024-11-20 09:41:31.925794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.499 [2024-11-20 09:41:31.925796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.761 09:41:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.761 09:41:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:01.761 09:41:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:01.761 09:41:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:01.761 09:41:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:01.761 09:41:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.761 09:41:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:02.023 [2024-11-20 09:41:32.801385] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.023 09:41:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:02.283 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:02.283 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:02.544 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:02.544 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:02.805 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:02.805 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=475633e6-0d9f-4c81-91cc-b91e02ec2851 00:08:02.805 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 475633e6-0d9f-4c81-91cc-b91e02ec2851 lvol 20 00:08:03.064 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=46aa816d-5593-427b-8440-ac16b36549f0 00:08:03.065 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:03.325 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 46aa816d-5593-427b-8440-ac16b36549f0 00:08:03.586 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:03.586 [2024-11-20 09:41:34.465814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.586 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:03.848 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1183031 00:08:03.848 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:03.848 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:04.790 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 46aa816d-5593-427b-8440-ac16b36549f0 MY_SNAPSHOT 00:08:05.052 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5ffa8a6e-f635-4043-a67a-aeec5d5d123d 00:08:05.052 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 46aa816d-5593-427b-8440-ac16b36549f0 30 00:08:05.314 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5ffa8a6e-f635-4043-a67a-aeec5d5d123d MY_CLONE 00:08:05.574 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a0e650e8-9bb9-4f29-a63d-d2c64771bf63 00:08:05.574 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a0e650e8-9bb9-4f29-a63d-d2c64771bf63 00:08:05.835 09:41:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1183031 00:08:15.839 Initializing NVMe Controllers 00:08:15.839 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:15.839 Controller IO queue size 128, less than required. 00:08:15.839 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:15.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:15.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:15.839 Initialization complete. Launching workers. 00:08:15.839 ======================================================== 00:08:15.839 Latency(us) 00:08:15.840 Device Information : IOPS MiB/s Average min max 00:08:15.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16249.90 63.48 7878.77 1580.95 44626.68 00:08:15.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17184.28 67.13 7449.36 599.17 52415.64 00:08:15.840 ======================================================== 00:08:15.840 Total : 33434.18 130.60 7658.06 599.17 52415.64 00:08:15.840 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 46aa816d-5593-427b-8440-ac16b36549f0 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 475633e6-0d9f-4c81-91cc-b91e02ec2851 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:15.840 rmmod nvme_tcp 00:08:15.840 rmmod nvme_fabrics 00:08:15.840 rmmod nvme_keyring 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1182375 ']' 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1182375 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1182375 ']' 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1182375 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1182375 00:08:15.840 09:41:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1182375' 00:08:15.840 killing process with pid 1182375 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1182375 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1182375 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.840 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.224 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:17.224 00:08:17.224 real 0m24.008s 00:08:17.224 user 1m5.128s 00:08:17.224 sys 0m8.685s 00:08:17.224 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.224 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:17.224 ************************************ 00:08:17.224 END TEST nvmf_lvol 00:08:17.224 ************************************ 00:08:17.224 09:41:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:17.224 09:41:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:17.224 09:41:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.224 09:41:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:17.224 ************************************ 00:08:17.224 START TEST nvmf_lvs_grow 00:08:17.224 ************************************ 00:08:17.224 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:17.486 * Looking for test storage... 
00:08:17.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:17.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.486 --rc genhtml_branch_coverage=1 00:08:17.486 --rc genhtml_function_coverage=1 00:08:17.486 --rc genhtml_legend=1 00:08:17.486 --rc geninfo_all_blocks=1 00:08:17.486 --rc geninfo_unexecuted_blocks=1 00:08:17.486 00:08:17.486 ' 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:17.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.486 --rc genhtml_branch_coverage=1 00:08:17.486 --rc genhtml_function_coverage=1 00:08:17.486 --rc genhtml_legend=1 00:08:17.486 --rc geninfo_all_blocks=1 00:08:17.486 --rc geninfo_unexecuted_blocks=1 00:08:17.486 00:08:17.486 ' 00:08:17.486 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:17.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.487 --rc genhtml_branch_coverage=1 00:08:17.487 --rc genhtml_function_coverage=1 00:08:17.487 --rc genhtml_legend=1 00:08:17.487 --rc geninfo_all_blocks=1 00:08:17.487 --rc geninfo_unexecuted_blocks=1 00:08:17.487 00:08:17.487 ' 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:17.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.487 --rc genhtml_branch_coverage=1 00:08:17.487 --rc genhtml_function_coverage=1 00:08:17.487 --rc genhtml_legend=1 00:08:17.487 --rc geninfo_all_blocks=1 00:08:17.487 --rc geninfo_unexecuted_blocks=1 00:08:17.487 00:08:17.487 ' 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:17.487 09:41:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:17.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:17.487 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:25.637 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:25.638 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:25.638 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:25.638 09:41:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:25.638 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:25.638 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:25.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:08:25.638 00:08:25.638 --- 10.0.0.2 ping statistics --- 00:08:25.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.638 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:25.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:08:25.638 00:08:25.638 --- 10.0.0.1 ping statistics --- 00:08:25.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.638 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1189447 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1189447 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1189447 ']' 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.638 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.638 [2024-11-20 09:41:55.837609] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
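Condensed, the nvmftestinit network setup traced above amounts to the following; this is a minimal sketch, assuming a two-port NIC whose kernel interfaces came up as cvl_0_0 and cvl_0_1 on this rig (substitute whatever names your hardware detects):

  # Move one port into a private namespace so target and initiator
  # traffic traverses a real wire instead of the loopback device.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Initiator side (default namespace) and target side (namespace).
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port and verify reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt launched next runs inside that namespace (note the NVMF_TARGET_NS_CMD wrapper, ip netns exec cvl_0_0_ns_spdk, on the nvmf_tgt invocation below), which is why the target listens on 10.0.0.2 while bdevperf connects from the default namespace.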
00:08:25.638 [2024-11-20 09:41:55.837675] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.638 [2024-11-20 09:41:55.935755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.638 [2024-11-20 09:41:55.986701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.638 [2024-11-20 09:41:55.986751] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.638 [2024-11-20 09:41:55.986760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.638 [2024-11-20 09:41:55.986768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.639 [2024-11-20 09:41:55.986775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.639 [2024-11-20 09:41:55.987582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.901 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.901 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:25.901 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:25.901 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:25.901 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.901 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.901 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:26.162 [2024-11-20 09:41:56.857486] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.162 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:26.162 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.162 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.162 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.162 ************************************ 00:08:26.162 START TEST lvs_grow_clean 00:08:26.162 ************************************ 00:08:26.162 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:26.162 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:26.162 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:26.162 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:26.162 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:26.162 09:41:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:26.162 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:26.162 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:26.162 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:26.162 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:26.424 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:26.424 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:26.686 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=100a57dd-eab6-41eb-93ae-e7198e96d8e1 00:08:26.686 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 100a57dd-eab6-41eb-93ae-e7198e96d8e1 00:08:26.686 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:26.686 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:26.686 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:26.686 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 100a57dd-eab6-41eb-93ae-e7198e96d8e1 lvol 150 00:08:26.947 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f257cb05-d800-4cb4-9fd6-07561d357fe9 00:08:26.947 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:26.947 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:27.208 [2024-11-20 09:41:57.886699] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:27.208 [2024-11-20 09:41:57.886782] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:27.208 true 00:08:27.208 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
100a57dd-eab6-41eb-93ae-e7198e96d8e1 00:08:27.208 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:27.208 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:27.208 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:27.469 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f257cb05-d800-4cb4-9fd6-07561d357fe9 00:08:27.729 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:27.729 [2024-11-20 09:41:58.625055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.990 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:27.990 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1190109 00:08:27.990 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:27.990 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:27.990 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1190109 /var/tmp/bdevperf.sock 00:08:27.990 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1190109 ']' 00:08:27.990 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:27.990 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.990 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:27.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:27.990 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.990 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:27.991 [2024-11-20 09:41:58.870155] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
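Stripped of the xtrace prefixes, the lvs_grow body being exercised here reduces to the RPC sequence below. This is a condensed reading of the trace, not a verbatim excerpt of the script; rpc.py is scripts/rpc.py in the SPDK tree, and the 49/99 cluster counts follow from a 4 MiB cluster size against 200M and 400M backing files:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk  # shorthand for this rig's checkout
  rpc=$SPDK/scripts/rpc.py
  aio=$SPDK/test/nvmf/target/aio_bdev

  # Build an lvstore on a 200M file-backed aio bdev and carve a 150M lvol.
  rm -f $aio && truncate -s 200M $aio
  $rpc bdev_aio_create $aio aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'  # 49
  lvol=$($rpc bdev_lvol_create -u $lvs lvol 150)
  # Grow the backing file and let the aio bdev pick it up; the rescan alone
  # leaves total_data_clusters at 49 -- only bdev_lvol_grow_lvstore (issued
  # later, during the bdevperf run) claims the new space and reports 99.
  truncate -s 400M $aio
  $rpc bdev_aio_rescan aio_bdev
  # Export the lvol over NVMe/TCP for the initiator-side bdevperf.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420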
00:08:27.991 [2024-11-20 09:41:58.870244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190109 ] 00:08:28.251 [2024-11-20 09:41:58.960675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.251 [2024-11-20 09:41:59.013295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.850 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.850 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:28.850 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:29.112 Nvme0n1 00:08:29.112 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:29.372 [ 00:08:29.372 { 00:08:29.372 "name": "Nvme0n1", 00:08:29.372 "aliases": [ 00:08:29.372 "f257cb05-d800-4cb4-9fd6-07561d357fe9" 00:08:29.372 ], 00:08:29.372 "product_name": "NVMe disk", 00:08:29.372 "block_size": 4096, 00:08:29.372 "num_blocks": 38912, 00:08:29.372 "uuid": "f257cb05-d800-4cb4-9fd6-07561d357fe9", 00:08:29.372 "numa_id": 0, 00:08:29.372 "assigned_rate_limits": { 00:08:29.372 "rw_ios_per_sec": 0, 00:08:29.372 "rw_mbytes_per_sec": 0, 00:08:29.372 "r_mbytes_per_sec": 0, 00:08:29.372 "w_mbytes_per_sec": 0 00:08:29.372 }, 00:08:29.372 "claimed": false, 00:08:29.372 "zoned": false, 00:08:29.372 "supported_io_types": { 00:08:29.372 "read": true, 00:08:29.372 "write": true, 00:08:29.372 "unmap": true, 00:08:29.372 "flush": true, 00:08:29.372 "reset": true, 00:08:29.372 "nvme_admin": true, 00:08:29.372 "nvme_io": true, 00:08:29.372 "nvme_io_md": false, 00:08:29.372 "write_zeroes": true, 00:08:29.372 "zcopy": false, 00:08:29.372 "get_zone_info": false, 00:08:29.372 "zone_management": false, 00:08:29.372 "zone_append": false, 00:08:29.372 "compare": true, 00:08:29.372 "compare_and_write": true, 00:08:29.372 "abort": true, 00:08:29.372 "seek_hole": false, 00:08:29.372 "seek_data": false, 00:08:29.372 "copy": true, 00:08:29.372 "nvme_iov_md": false 00:08:29.372 }, 00:08:29.372 "memory_domains": [ 00:08:29.372 { 00:08:29.372 "dma_device_id": "system", 00:08:29.372 "dma_device_type": 1 00:08:29.372 } 00:08:29.372 ], 00:08:29.372 "driver_specific": { 00:08:29.372 "nvme": [ 00:08:29.372 { 00:08:29.372 "trid": { 00:08:29.372 "trtype": "TCP", 00:08:29.372 "adrfam": "IPv4", 00:08:29.372 "traddr": "10.0.0.2", 00:08:29.372 "trsvcid": "4420", 00:08:29.372 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:29.372 }, 00:08:29.372 "ctrlr_data": { 00:08:29.372 "cntlid": 1, 00:08:29.372 "vendor_id": "0x8086", 00:08:29.372 "model_number": "SPDK bdev Controller", 00:08:29.372 "serial_number": "SPDK0", 00:08:29.372 "firmware_revision": "25.01", 00:08:29.372 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:29.372 "oacs": { 00:08:29.372 "security": 0, 00:08:29.372 "format": 0, 00:08:29.372 "firmware": 0, 00:08:29.372 "ns_manage": 0 00:08:29.372 }, 00:08:29.372 "multi_ctrlr": true, 00:08:29.372 
"ana_reporting": false 00:08:29.372 }, 00:08:29.372 "vs": { 00:08:29.372 "nvme_version": "1.3" 00:08:29.372 }, 00:08:29.372 "ns_data": { 00:08:29.372 "id": 1, 00:08:29.372 "can_share": true 00:08:29.372 } 00:08:29.372 } 00:08:29.372 ], 00:08:29.372 "mp_policy": "active_passive" 00:08:29.372 } 00:08:29.372 } 00:08:29.372 ] 00:08:29.372 09:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1190275 00:08:29.372 09:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:29.372 09:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:29.372 Running I/O for 10 seconds... 00:08:30.755 Latency(us) 00:08:30.755 [2024-11-20T08:42:01.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.755 Nvme0n1 : 1.00 25093.00 98.02 0.00 0.00 0.00 0.00 0.00 00:08:30.755 [2024-11-20T08:42:01.671Z] =================================================================================================================== 00:08:30.755 [2024-11-20T08:42:01.671Z] Total : 25093.00 98.02 0.00 0.00 0.00 0.00 0.00 00:08:30.755 00:08:31.326 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 100a57dd-eab6-41eb-93ae-e7198e96d8e1 00:08:31.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.587 Nvme0n1 : 2.00 25218.50 98.51 0.00 0.00 0.00 0.00 0.00 00:08:31.587 [2024-11-20T08:42:02.503Z] =================================================================================================================== 00:08:31.587 [2024-11-20T08:42:02.503Z] Total : 25218.50 98.51 0.00 0.00 0.00 0.00 0.00 00:08:31.587 00:08:31.587 true 00:08:31.587 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 100a57dd-eab6-41eb-93ae-e7198e96d8e1 00:08:31.587 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:31.848 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:31.848 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:31.848 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1190275 00:08:32.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.419 Nvme0n1 : 3.00 25313.67 98.88 0.00 0.00 0.00 0.00 0.00 00:08:32.419 [2024-11-20T08:42:03.335Z] =================================================================================================================== 00:08:32.419 [2024-11-20T08:42:03.335Z] Total : 25313.67 98.88 0.00 0.00 0.00 0.00 0.00 00:08:32.419 00:08:33.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.804 Nvme0n1 : 4.00 25368.25 99.09 0.00 0.00 0.00 0.00 0.00 00:08:33.804 [2024-11-20T08:42:04.720Z] 
=================================================================================================================== 00:08:33.804 [2024-11-20T08:42:04.720Z] Total : 25368.25 99.09 0.00 0.00 0.00 0.00 0.00 00:08:33.804 00:08:34.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.376 Nvme0n1 : 5.00 25405.00 99.24 0.00 0.00 0.00 0.00 0.00 00:08:34.376 [2024-11-20T08:42:05.292Z] =================================================================================================================== 00:08:34.376 [2024-11-20T08:42:05.292Z] Total : 25405.00 99.24 0.00 0.00 0.00 0.00 0.00 00:08:34.376 00:08:35.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.762 Nvme0n1 : 6.00 25437.00 99.36 0.00 0.00 0.00 0.00 0.00 00:08:35.762 [2024-11-20T08:42:06.678Z] =================================================================================================================== 00:08:35.762 [2024-11-20T08:42:06.678Z] Total : 25437.00 99.36 0.00 0.00 0.00 0.00 0.00 00:08:35.762 00:08:36.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.705 Nvme0n1 : 7.00 25459.71 99.45 0.00 0.00 0.00 0.00 0.00 00:08:36.705 [2024-11-20T08:42:07.621Z] =================================================================================================================== 00:08:36.705 [2024-11-20T08:42:07.621Z] Total : 25459.71 99.45 0.00 0.00 0.00 0.00 0.00 00:08:36.705 00:08:37.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.647 Nvme0n1 : 8.00 25482.62 99.54 0.00 0.00 0.00 0.00 0.00 00:08:37.647 [2024-11-20T08:42:08.563Z] =================================================================================================================== 00:08:37.647 [2024-11-20T08:42:08.563Z] Total : 25482.62 99.54 0.00 0.00 0.00 0.00 0.00 00:08:37.647 00:08:38.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.590 Nvme0n1 : 9.00 25494.78 99.59 0.00 0.00 0.00 0.00 0.00 00:08:38.590 [2024-11-20T08:42:09.506Z] =================================================================================================================== 00:08:38.590 [2024-11-20T08:42:09.506Z] Total : 25494.78 99.59 0.00 0.00 0.00 0.00 0.00 00:08:38.590 00:08:39.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.532 Nvme0n1 : 10.00 25505.10 99.63 0.00 0.00 0.00 0.00 0.00 00:08:39.532 [2024-11-20T08:42:10.448Z] =================================================================================================================== 00:08:39.532 [2024-11-20T08:42:10.448Z] Total : 25505.10 99.63 0.00 0.00 0.00 0.00 0.00 00:08:39.532 00:08:39.532 00:08:39.532 Latency(us) 00:08:39.532 [2024-11-20T08:42:10.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.532 Nvme0n1 : 10.00 25501.86 99.62 0.00 0.00 5015.81 2184.53 8792.75 00:08:39.532 [2024-11-20T08:42:10.448Z] =================================================================================================================== 00:08:39.532 [2024-11-20T08:42:10.448Z] Total : 25501.86 99.62 0.00 0.00 5015.81 2184.53 8792.75 00:08:39.532 { 00:08:39.532 "results": [ 00:08:39.532 { 00:08:39.532 "job": "Nvme0n1", 00:08:39.532 "core_mask": "0x2", 00:08:39.532 "workload": "randwrite", 00:08:39.532 "status": "finished", 00:08:39.532 "queue_depth": 128, 00:08:39.532 "io_size": 4096, 00:08:39.532 
"runtime": 10.003742, 00:08:39.532 "iops": 25501.857205033877, 00:08:39.532 "mibps": 99.61662970716358, 00:08:39.532 "io_failed": 0, 00:08:39.532 "io_timeout": 0, 00:08:39.532 "avg_latency_us": 5015.806057945337, 00:08:39.532 "min_latency_us": 2184.5333333333333, 00:08:39.532 "max_latency_us": 8792.746666666666 00:08:39.532 } 00:08:39.532 ], 00:08:39.532 "core_count": 1 00:08:39.532 } 00:08:39.532 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1190109 00:08:39.532 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1190109 ']' 00:08:39.532 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1190109 00:08:39.532 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:39.532 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.532 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1190109 00:08:39.532 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:39.532 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:39.532 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1190109' 00:08:39.532 killing process with pid 1190109 00:08:39.532 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1190109 00:08:39.532 Received shutdown signal, test time was about 10.000000 seconds 00:08:39.532 00:08:39.532 Latency(us) 00:08:39.532 [2024-11-20T08:42:10.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.532 [2024-11-20T08:42:10.448Z] =================================================================================================================== 00:08:39.532 [2024-11-20T08:42:10.448Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:39.532 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1190109 00:08:39.793 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:39.793 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:40.056 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 100a57dd-eab6-41eb-93ae-e7198e96d8e1 00:08:40.056 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:40.317 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:40.317 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:40.317 09:42:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:40.317 [2024-11-20 09:42:11.161382] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:40.317 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 100a57dd-eab6-41eb-93ae-e7198e96d8e1 00:08:40.317 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:40.317 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 100a57dd-eab6-41eb-93ae-e7198e96d8e1 00:08:40.317 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.317 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.317 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.317 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.317 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.317 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.317 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.317 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:40.317 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 100a57dd-eab6-41eb-93ae-e7198e96d8e1 00:08:40.579 request: 00:08:40.579 { 00:08:40.579 "uuid": "100a57dd-eab6-41eb-93ae-e7198e96d8e1", 00:08:40.579 "method": "bdev_lvol_get_lvstores", 00:08:40.579 "req_id": 1 00:08:40.579 } 00:08:40.579 Got JSON-RPC error response 00:08:40.579 response: 00:08:40.579 { 00:08:40.579 "code": -19, 00:08:40.579 "message": "No such device" 00:08:40.579 } 00:08:40.579 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:40.579 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:40.579 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:40.579 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:40.579 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.840 aio_bdev 00:08:40.840 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f257cb05-d800-4cb4-9fd6-07561d357fe9 00:08:40.840 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=f257cb05-d800-4cb4-9fd6-07561d357fe9 00:08:40.840 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.840 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:40.840 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.840 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.840 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:40.840 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f257cb05-d800-4cb4-9fd6-07561d357fe9 -t 2000 00:08:41.101 [ 00:08:41.101 { 00:08:41.101 "name": "f257cb05-d800-4cb4-9fd6-07561d357fe9", 00:08:41.101 "aliases": [ 00:08:41.101 "lvs/lvol" 00:08:41.101 ], 00:08:41.101 "product_name": "Logical Volume", 00:08:41.101 "block_size": 4096, 00:08:41.101 "num_blocks": 38912, 00:08:41.101 "uuid": "f257cb05-d800-4cb4-9fd6-07561d357fe9", 00:08:41.101 "assigned_rate_limits": { 00:08:41.101 "rw_ios_per_sec": 0, 00:08:41.101 "rw_mbytes_per_sec": 0, 00:08:41.101 "r_mbytes_per_sec": 0, 00:08:41.101 "w_mbytes_per_sec": 0 00:08:41.101 }, 00:08:41.101 "claimed": false, 00:08:41.101 "zoned": false, 00:08:41.101 "supported_io_types": { 00:08:41.101 "read": true, 00:08:41.101 "write": true, 00:08:41.101 "unmap": true, 00:08:41.101 "flush": false, 00:08:41.101 "reset": true, 00:08:41.101 "nvme_admin": false, 00:08:41.101 "nvme_io": false, 00:08:41.101 "nvme_io_md": false, 00:08:41.101 "write_zeroes": true, 00:08:41.101 "zcopy": false, 00:08:41.101 "get_zone_info": false, 00:08:41.101 "zone_management": false, 00:08:41.101 "zone_append": false, 00:08:41.101 "compare": false, 00:08:41.101 "compare_and_write": false, 00:08:41.101 "abort": false, 00:08:41.101 "seek_hole": true, 00:08:41.101 "seek_data": true, 00:08:41.101 "copy": false, 00:08:41.101 "nvme_iov_md": false 00:08:41.101 }, 00:08:41.101 "driver_specific": { 00:08:41.101 "lvol": { 00:08:41.101 "lvol_store_uuid": "100a57dd-eab6-41eb-93ae-e7198e96d8e1", 00:08:41.101 "base_bdev": "aio_bdev", 00:08:41.101 "thin_provision": false, 00:08:41.101 "num_allocated_clusters": 38, 00:08:41.101 "snapshot": false, 00:08:41.101 "clone": false, 00:08:41.101 "esnap_clone": false 00:08:41.101 } 00:08:41.101 } 00:08:41.101 } 00:08:41.101 ] 00:08:41.101 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:41.101 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 100a57dd-eab6-41eb-93ae-e7198e96d8e1 00:08:41.101 
09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:41.362 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:41.362 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:41.362 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 100a57dd-eab6-41eb-93ae-e7198e96d8e1 00:08:41.362 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:41.362 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f257cb05-d800-4cb4-9fd6-07561d357fe9 00:08:41.623 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 100a57dd-eab6-41eb-93ae-e7198e96d8e1 00:08:41.884 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:41.884 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.884 00:08:41.884 real 0m15.850s 00:08:41.884 user 0m15.590s 00:08:41.884 sys 0m1.430s 00:08:41.884 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.884 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:41.884 ************************************ 00:08:41.884 END TEST lvs_grow_clean 00:08:41.884 ************************************ 00:08:42.146 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:42.146 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:42.146 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.146 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:42.146 ************************************ 00:08:42.146 START TEST lvs_grow_dirty 00:08:42.146 ************************************ 00:08:42.146 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:42.146 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:42.146 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:42.146 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:42.146 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:42.146 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:42.146 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:42.146 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.146 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.146 09:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:42.146 09:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:42.146 09:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:42.407 09:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d6f0ffc2-ab90-4820-b5a3-3ed9cc0b5e5a 00:08:42.407 09:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6f0ffc2-ab90-4820-b5a3-3ed9cc0b5e5a 00:08:42.407 09:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:42.668 09:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:42.668 09:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:42.668 09:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d6f0ffc2-ab90-4820-b5a3-3ed9cc0b5e5a lvol 150 00:08:42.668 09:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2545f511-7ac0-410e-ad55-f22834333901 00:08:42.668 09:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.668 09:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:42.929 [2024-11-20 09:42:13.698594] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:42.929 [2024-11-20 09:42:13.698636] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:42.929 true 00:08:42.929 09:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6f0ffc2-ab90-4820-b5a3-3ed9cc0b5e5a 00:08:42.929 09:42:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:43.190 09:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:43.190 09:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:43.190 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2545f511-7ac0-410e-ad55-f22834333901 00:08:43.451 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:43.451 [2024-11-20 09:42:14.340464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.451 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:43.712 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1193821 00:08:43.712 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:43.712 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1193821 /var/tmp/bdevperf.sock 00:08:43.712 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:43.712 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1193821 ']' 00:08:43.712 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:43.712 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.712 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:43.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:43.712 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.712 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:43.712 [2024-11-20 09:42:14.568587] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
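As in the clean variant above, the measurement side is a separate bdevperf process with its own RPC socket, and the lvstore is grown underneath it mid-run. A condensed sketch of that sequence, again reconstructed from the trace rather than quoted from the script (the wait-on-pid handling is simplified; the script tracks $bdevperf_pid and $run_test_pid with traps):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  bdevperf_pid=$!
  # Attach the exported namespace as bdev Nvme0n1 inside bdevperf.
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # Start the 10 s randwrite run, then grow the lvstore two seconds in.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!
  sleep 2
  $SPDK/scripts/rpc.py bdev_lvol_grow_lvstore -u d6f0ffc2-ab90-4820-b5a3-3ed9cc0b5e5a
  wait $run_test_pid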
00:08:43.712 [2024-11-20 09:42:14.568637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193821 ] 00:08:43.974 [2024-11-20 09:42:14.649104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.974 [2024-11-20 09:42:14.678896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.547 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.547 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:44.547 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:45.119 Nvme0n1 00:08:45.119 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:45.119 [ 00:08:45.119 { 00:08:45.119 "name": "Nvme0n1", 00:08:45.119 "aliases": [ 00:08:45.119 "2545f511-7ac0-410e-ad55-f22834333901" 00:08:45.119 ], 00:08:45.119 "product_name": "NVMe disk", 00:08:45.119 "block_size": 4096, 00:08:45.119 "num_blocks": 38912, 00:08:45.119 "uuid": "2545f511-7ac0-410e-ad55-f22834333901", 00:08:45.119 "numa_id": 0, 00:08:45.119 "assigned_rate_limits": { 00:08:45.119 "rw_ios_per_sec": 0, 00:08:45.119 "rw_mbytes_per_sec": 0, 00:08:45.119 "r_mbytes_per_sec": 0, 00:08:45.119 "w_mbytes_per_sec": 0 00:08:45.119 }, 00:08:45.119 "claimed": false, 00:08:45.119 "zoned": false, 00:08:45.119 "supported_io_types": { 00:08:45.119 "read": true, 00:08:45.119 "write": true, 00:08:45.119 "unmap": true, 00:08:45.119 "flush": true, 00:08:45.119 "reset": true, 00:08:45.119 "nvme_admin": true, 00:08:45.119 "nvme_io": true, 00:08:45.119 "nvme_io_md": false, 00:08:45.119 "write_zeroes": true, 00:08:45.119 "zcopy": false, 00:08:45.119 "get_zone_info": false, 00:08:45.119 "zone_management": false, 00:08:45.119 "zone_append": false, 00:08:45.119 "compare": true, 00:08:45.119 "compare_and_write": true, 00:08:45.119 "abort": true, 00:08:45.119 "seek_hole": false, 00:08:45.119 "seek_data": false, 00:08:45.119 "copy": true, 00:08:45.119 "nvme_iov_md": false 00:08:45.119 }, 00:08:45.119 "memory_domains": [ 00:08:45.119 { 00:08:45.119 "dma_device_id": "system", 00:08:45.119 "dma_device_type": 1 00:08:45.119 } 00:08:45.119 ], 00:08:45.119 "driver_specific": { 00:08:45.119 "nvme": [ 00:08:45.119 { 00:08:45.119 "trid": { 00:08:45.119 "trtype": "TCP", 00:08:45.119 "adrfam": "IPv4", 00:08:45.119 "traddr": "10.0.0.2", 00:08:45.119 "trsvcid": "4420", 00:08:45.119 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:45.119 }, 00:08:45.119 "ctrlr_data": { 00:08:45.119 "cntlid": 1, 00:08:45.119 "vendor_id": "0x8086", 00:08:45.119 "model_number": "SPDK bdev Controller", 00:08:45.119 "serial_number": "SPDK0", 00:08:45.119 "firmware_revision": "25.01", 00:08:45.119 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:45.119 "oacs": { 00:08:45.119 "security": 0, 00:08:45.119 "format": 0, 00:08:45.119 "firmware": 0, 00:08:45.119 "ns_manage": 0 00:08:45.119 }, 00:08:45.119 "multi_ctrlr": true, 00:08:45.119 
"ana_reporting": false 00:08:45.119 }, 00:08:45.120 "vs": { 00:08:45.120 "nvme_version": "1.3" 00:08:45.120 }, 00:08:45.120 "ns_data": { 00:08:45.120 "id": 1, 00:08:45.120 "can_share": true 00:08:45.120 } 00:08:45.120 } 00:08:45.120 ], 00:08:45.120 "mp_policy": "active_passive" 00:08:45.120 } 00:08:45.120 } 00:08:45.120 ] 00:08:45.120 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1194118 00:08:45.120 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:45.120 09:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:45.120 Running I/O for 10 seconds... 00:08:46.505 Latency(us) 00:08:46.505 [2024-11-20T08:42:17.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.505 Nvme0n1 : 1.00 25175.00 98.34 0.00 0.00 0.00 0.00 0.00 00:08:46.505 [2024-11-20T08:42:17.421Z] =================================================================================================================== 00:08:46.505 [2024-11-20T08:42:17.421Z] Total : 25175.00 98.34 0.00 0.00 0.00 0.00 0.00 00:08:46.505 00:08:47.076 09:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d6f0ffc2-ab90-4820-b5a3-3ed9cc0b5e5a 00:08:47.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.337 Nvme0n1 : 2.00 25289.00 98.79 0.00 0.00 0.00 0.00 0.00 00:08:47.337 [2024-11-20T08:42:18.253Z] =================================================================================================================== 00:08:47.337 [2024-11-20T08:42:18.253Z] Total : 25289.00 98.79 0.00 0.00 0.00 0.00 0.00 00:08:47.337 00:08:47.337 true 00:08:47.337 09:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6f0ffc2-ab90-4820-b5a3-3ed9cc0b5e5a 00:08:47.337 09:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:47.599 09:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:47.599 09:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:47.599 09:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1194118 00:08:48.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.172 Nvme0n1 : 3.00 25350.00 99.02 0.00 0.00 0.00 0.00 0.00 00:08:48.172 [2024-11-20T08:42:19.088Z] =================================================================================================================== 00:08:48.172 [2024-11-20T08:42:19.088Z] Total : 25350.00 99.02 0.00 0.00 0.00 0.00 0.00 00:08:48.172 00:08:49.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.115 Nvme0n1 : 4.00 25396.00 99.20 0.00 0.00 0.00 0.00 0.00 00:08:49.115 [2024-11-20T08:42:20.031Z] 
=================================================================================================================== 00:08:49.115 [2024-11-20T08:42:20.031Z] Total : 25396.00 99.20 0.00 0.00 0.00 0.00 0.00 00:08:49.115 00:08:50.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.503 Nvme0n1 : 5.00 25436.20 99.36 0.00 0.00 0.00 0.00 0.00 00:08:50.503 [2024-11-20T08:42:21.419Z] =================================================================================================================== 00:08:50.503 [2024-11-20T08:42:21.419Z] Total : 25436.20 99.36 0.00 0.00 0.00 0.00 0.00 00:08:50.503 00:08:51.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.445 Nvme0n1 : 6.00 25454.67 99.43 0.00 0.00 0.00 0.00 0.00 00:08:51.445 [2024-11-20T08:42:22.361Z] =================================================================================================================== 00:08:51.445 [2024-11-20T08:42:22.361Z] Total : 25454.67 99.43 0.00 0.00 0.00 0.00 0.00 00:08:51.445 00:08:52.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.388 Nvme0n1 : 7.00 25472.86 99.50 0.00 0.00 0.00 0.00 0.00 00:08:52.388 [2024-11-20T08:42:23.304Z] =================================================================================================================== 00:08:52.388 [2024-11-20T08:42:23.304Z] Total : 25472.86 99.50 0.00 0.00 0.00 0.00 0.00 00:08:52.388 00:08:53.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.331 Nvme0n1 : 8.00 25496.00 99.59 0.00 0.00 0.00 0.00 0.00 00:08:53.331 [2024-11-20T08:42:24.247Z] =================================================================================================================== 00:08:53.331 [2024-11-20T08:42:24.247Z] Total : 25496.00 99.59 0.00 0.00 0.00 0.00 0.00 00:08:53.331 00:08:54.273 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.273 Nvme0n1 : 9.00 25507.67 99.64 0.00 0.00 0.00 0.00 0.00 00:08:54.273 [2024-11-20T08:42:25.189Z] =================================================================================================================== 00:08:54.273 [2024-11-20T08:42:25.189Z] Total : 25507.67 99.64 0.00 0.00 0.00 0.00 0.00 00:08:54.273 00:08:55.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.216 Nvme0n1 : 10.00 25516.90 99.68 0.00 0.00 0.00 0.00 0.00 00:08:55.216 [2024-11-20T08:42:26.132Z] =================================================================================================================== 00:08:55.216 [2024-11-20T08:42:26.132Z] Total : 25516.90 99.68 0.00 0.00 0.00 0.00 0.00 00:08:55.216 00:08:55.216 00:08:55.216 Latency(us) 00:08:55.216 [2024-11-20T08:42:26.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.216 Nvme0n1 : 10.00 25520.64 99.69 0.00 0.00 5012.71 2771.63 8738.13 00:08:55.216 [2024-11-20T08:42:26.132Z] =================================================================================================================== 00:08:55.216 [2024-11-20T08:42:26.132Z] Total : 25520.64 99.69 0.00 0.00 5012.71 2771.63 8738.13 00:08:55.216 { 00:08:55.216 "results": [ 00:08:55.216 { 00:08:55.216 "job": "Nvme0n1", 00:08:55.216 "core_mask": "0x2", 00:08:55.216 "workload": "randwrite", 00:08:55.216 "status": "finished", 00:08:55.216 "queue_depth": 128, 00:08:55.216 "io_size": 4096, 00:08:55.216 
"runtime": 10.003552, 00:08:55.216 "iops": 25520.635070422984, 00:08:55.216 "mibps": 99.68998074383978, 00:08:55.216 "io_failed": 0, 00:08:55.216 "io_timeout": 0, 00:08:55.216 "avg_latency_us": 5012.706321500057, 00:08:55.216 "min_latency_us": 2771.6266666666666, 00:08:55.216 "max_latency_us": 8738.133333333333 00:08:55.216 } 00:08:55.216 ], 00:08:55.216 "core_count": 1 00:08:55.216 } 00:08:55.216 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1193821 00:08:55.216 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1193821 ']' 00:08:55.216 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1193821 00:08:55.217 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:55.217 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.217 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1193821 00:08:55.541 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:55.541 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:55.541 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1193821' 00:08:55.541 killing process with pid 1193821 00:08:55.541 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1193821 00:08:55.541 Received shutdown signal, test time was about 10.000000 seconds 00:08:55.541 00:08:55.541 Latency(us) 00:08:55.541 [2024-11-20T08:42:26.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.541 [2024-11-20T08:42:26.457Z] =================================================================================================================== 00:08:55.541 [2024-11-20T08:42:26.457Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:55.541 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1193821 00:08:55.541 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:55.541 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:55.841 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6f0ffc2-ab90-4820-b5a3-3ed9cc0b5e5a 00:08:55.841 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:55.841 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:55.841 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:55.841 09:42:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1189447 00:08:55.841 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1189447 00:08:56.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1189447 Killed "${NVMF_APP[@]}" "$@" 00:08:56.154 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:56.154 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:56.154 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:56.154 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:56.154 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:56.154 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1196197 00:08:56.154 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1196197 00:08:56.154 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:56.154 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1196197 ']' 00:08:56.154 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.154 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.154 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.154 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.154 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:56.154 [2024-11-20 09:42:26.849858] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:56.154 [2024-11-20 09:42:26.849938] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.154 [2024-11-20 09:42:26.945176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.154 [2024-11-20 09:42:26.974272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.154 [2024-11-20 09:42:26.974299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.154 [2024-11-20 09:42:26.974305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.154 [2024-11-20 09:42:26.974309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
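The app_setup_trace notices above spell out how to harvest the tracepoints enabled by the -e 0xFFFF mask that nvmfappstart passes to nvmf_tgt. A short sketch, assuming the job's build tree; the offline-decode invocation is an illustration of the tool's trace-file mode rather than something captured in this run:

    build/bin/spdk_trace -s nvmf -i 0          # live snapshot from the running target
    cp /dev/shm/nvmf_trace.0 /tmp/             # or keep the shm file for later
    build/bin/spdk_trace -f /tmp/nvmf_trace.0  # decode the saved trace offline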
00:08:56.154 [2024-11-20 09:42:26.974313] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.154 [2024-11-20 09:42:26.974746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.746 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.746 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:56.746 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:56.746 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:56.746 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:57.006 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.006 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:57.006 [2024-11-20 09:42:27.827732] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:57.006 [2024-11-20 09:42:27.827809] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:57.006 [2024-11-20 09:42:27.827833] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:57.006 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:57.006 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2545f511-7ac0-410e-ad55-f22834333901 00:08:57.006 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=2545f511-7ac0-410e-ad55-f22834333901 00:08:57.006 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.006 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:57.006 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.006 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.006 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:57.267 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2545f511-7ac0-410e-ad55-f22834333901 -t 2000 00:08:57.528 [ 00:08:57.528 { 00:08:57.528 "name": "2545f511-7ac0-410e-ad55-f22834333901", 00:08:57.528 "aliases": [ 00:08:57.528 "lvs/lvol" 00:08:57.528 ], 00:08:57.528 "product_name": "Logical Volume", 00:08:57.528 "block_size": 4096, 00:08:57.528 "num_blocks": 38912, 00:08:57.528 "uuid": "2545f511-7ac0-410e-ad55-f22834333901", 00:08:57.528 "assigned_rate_limits": { 00:08:57.528 "rw_ios_per_sec": 0, 00:08:57.528 "rw_mbytes_per_sec": 0, 
00:08:57.528 "r_mbytes_per_sec": 0, 00:08:57.528 "w_mbytes_per_sec": 0 00:08:57.528 }, 00:08:57.528 "claimed": false, 00:08:57.528 "zoned": false, 00:08:57.528 "supported_io_types": { 00:08:57.528 "read": true, 00:08:57.528 "write": true, 00:08:57.528 "unmap": true, 00:08:57.528 "flush": false, 00:08:57.528 "reset": true, 00:08:57.528 "nvme_admin": false, 00:08:57.528 "nvme_io": false, 00:08:57.528 "nvme_io_md": false, 00:08:57.528 "write_zeroes": true, 00:08:57.528 "zcopy": false, 00:08:57.528 "get_zone_info": false, 00:08:57.528 "zone_management": false, 00:08:57.528 "zone_append": false, 00:08:57.528 "compare": false, 00:08:57.528 "compare_and_write": false, 00:08:57.528 "abort": false, 00:08:57.528 "seek_hole": true, 00:08:57.528 "seek_data": true, 00:08:57.528 "copy": false, 00:08:57.528 "nvme_iov_md": false 00:08:57.528 }, 00:08:57.528 "driver_specific": { 00:08:57.528 "lvol": { 00:08:57.528 "lvol_store_uuid": "d6f0ffc2-ab90-4820-b5a3-3ed9cc0b5e5a", 00:08:57.528 "base_bdev": "aio_bdev", 00:08:57.528 "thin_provision": false, 00:08:57.528 "num_allocated_clusters": 38, 00:08:57.528 "snapshot": false, 00:08:57.528 "clone": false, 00:08:57.528 "esnap_clone": false 00:08:57.528 } 00:08:57.528 } 00:08:57.528 } 00:08:57.528 ] 00:08:57.528 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:57.528 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6f0ffc2-ab90-4820-b5a3-3ed9cc0b5e5a 00:08:57.528 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:57.528 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:57.528 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6f0ffc2-ab90-4820-b5a3-3ed9cc0b5e5a 00:08:57.528 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:57.788 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:57.788 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:57.788 [2024-11-20 09:42:28.652328] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:57.788 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6f0ffc2-ab90-4820-b5a3-3ed9cc0b5e5a 00:08:57.788 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:57.788 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6f0ffc2-ab90-4820-b5a3-3ed9cc0b5e5a 00:08:57.788 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.788 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:57.789 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.789 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:57.789 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.789 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:57.789 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.789 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:57.789 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6f0ffc2-ab90-4820-b5a3-3ed9cc0b5e5a 00:08:58.050 request: 00:08:58.050 { 00:08:58.050 "uuid": "d6f0ffc2-ab90-4820-b5a3-3ed9cc0b5e5a", 00:08:58.050 "method": "bdev_lvol_get_lvstores", 00:08:58.050 "req_id": 1 00:08:58.050 } 00:08:58.050 Got JSON-RPC error response 00:08:58.050 response: 00:08:58.050 { 00:08:58.050 "code": -19, 00:08:58.050 "message": "No such device" 00:08:58.050 } 00:08:58.050 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:58.050 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:58.050 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:58.050 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:58.050 09:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:58.310 aio_bdev 00:08:58.310 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2545f511-7ac0-410e-ad55-f22834333901 00:08:58.310 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=2545f511-7ac0-410e-ad55-f22834333901 00:08:58.310 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.310 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:58.310 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.310 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.310 09:42:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:58.310 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2545f511-7ac0-410e-ad55-f22834333901 -t 2000 00:08:58.571 [ 00:08:58.571 { 00:08:58.571 "name": "2545f511-7ac0-410e-ad55-f22834333901", 00:08:58.571 "aliases": [ 00:08:58.571 "lvs/lvol" 00:08:58.571 ], 00:08:58.571 "product_name": "Logical Volume", 00:08:58.571 "block_size": 4096, 00:08:58.571 "num_blocks": 38912, 00:08:58.571 "uuid": "2545f511-7ac0-410e-ad55-f22834333901", 00:08:58.571 "assigned_rate_limits": { 00:08:58.571 "rw_ios_per_sec": 0, 00:08:58.571 "rw_mbytes_per_sec": 0, 00:08:58.571 "r_mbytes_per_sec": 0, 00:08:58.571 "w_mbytes_per_sec": 0 00:08:58.571 }, 00:08:58.571 "claimed": false, 00:08:58.571 "zoned": false, 00:08:58.571 "supported_io_types": { 00:08:58.571 "read": true, 00:08:58.571 "write": true, 00:08:58.571 "unmap": true, 00:08:58.571 "flush": false, 00:08:58.571 "reset": true, 00:08:58.571 "nvme_admin": false, 00:08:58.571 "nvme_io": false, 00:08:58.571 "nvme_io_md": false, 00:08:58.571 "write_zeroes": true, 00:08:58.571 "zcopy": false, 00:08:58.571 "get_zone_info": false, 00:08:58.571 "zone_management": false, 00:08:58.571 "zone_append": false, 00:08:58.571 "compare": false, 00:08:58.571 "compare_and_write": false, 00:08:58.571 "abort": false, 00:08:58.571 "seek_hole": true, 00:08:58.571 "seek_data": true, 00:08:58.571 "copy": false, 00:08:58.571 "nvme_iov_md": false 00:08:58.571 }, 00:08:58.571 "driver_specific": { 00:08:58.571 "lvol": { 00:08:58.571 "lvol_store_uuid": "d6f0ffc2-ab90-4820-b5a3-3ed9cc0b5e5a", 00:08:58.571 "base_bdev": "aio_bdev", 00:08:58.571 "thin_provision": false, 00:08:58.571 "num_allocated_clusters": 38, 00:08:58.571 "snapshot": false, 00:08:58.571 "clone": false, 00:08:58.571 "esnap_clone": false 00:08:58.571 } 00:08:58.571 } 00:08:58.571 } 00:08:58.571 ] 00:08:58.571 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:58.571 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6f0ffc2-ab90-4820-b5a3-3ed9cc0b5e5a 00:08:58.571 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:58.832 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:58.832 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6f0ffc2-ab90-4820-b5a3-3ed9cc0b5e5a 00:08:58.832 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:58.832 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:58.832 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2545f511-7ac0-410e-ad55-f22834333901 00:08:59.092 09:42:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d6f0ffc2-ab90-4820-b5a3-3ed9cc0b5e5a 00:08:59.353 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:59.353 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:59.353 00:08:59.353 real 0m17.338s 00:08:59.353 user 0m45.696s 00:08:59.353 sys 0m2.976s 00:08:59.353 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.353 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:59.353 ************************************ 00:08:59.353 END TEST lvs_grow_dirty 00:08:59.353 ************************************ 00:08:59.353 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:59.353 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:59.353 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:59.353 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:59.353 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:59.353 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:59.353 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:59.353 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:59.353 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:59.353 nvmf_trace.0 00:08:59.613 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:59.613 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:59.613 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:59.613 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:59.613 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:59.613 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:59.613 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:59.613 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:59.613 rmmod nvme_tcp 00:08:59.613 rmmod nvme_fabrics 00:08:59.613 rmmod nvme_keyring 00:08:59.613 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:59.613 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:59.613 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:59.613 
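The teardown traced here is the standard nvmftestfini path: archive the trace shm file, sync, and unload the kernel initiator modules before the target process is killed. Condensed, with $output_dir and $nvmfpid as illustrative stand-ins for the harness variables:

    tar -C /dev/shm -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0  # keep the trace
    sync
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring  # rmmod order as logged above
    kill "$nvmfpid"                                    # killprocess in the harness then waits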
09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1196197 ']' 00:08:59.613 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1196197 00:08:59.613 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1196197 ']' 00:08:59.613 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1196197 00:08:59.613 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:59.613 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.614 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196197 00:08:59.614 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:59.614 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:59.614 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196197' 00:08:59.614 killing process with pid 1196197 00:08:59.614 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1196197 00:08:59.614 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1196197 00:08:59.875 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:59.875 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:59.875 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:59.875 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:59.875 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:59.875 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:59.875 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:59.875 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:59.875 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:59.875 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.875 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.875 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.789 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:01.789 00:09:01.789 real 0m44.562s 00:09:01.789 user 1m7.478s 00:09:01.789 sys 0m10.606s 00:09:01.789 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.789 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:01.789 ************************************ 00:09:01.789 END TEST nvmf_lvs_grow 00:09:01.789 ************************************ 00:09:01.789 09:42:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:01.789 09:42:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:01.789 09:42:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.789 09:42:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.789 ************************************ 00:09:01.789 START TEST nvmf_bdev_io_wait 00:09:01.789 ************************************ 00:09:01.789 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:02.051 * Looking for test storage... 00:09:02.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:02.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.051 --rc genhtml_branch_coverage=1 00:09:02.051 --rc genhtml_function_coverage=1 00:09:02.051 --rc genhtml_legend=1 00:09:02.051 --rc geninfo_all_blocks=1 00:09:02.051 --rc geninfo_unexecuted_blocks=1 00:09:02.051 00:09:02.051 ' 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:02.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.051 --rc genhtml_branch_coverage=1 00:09:02.051 --rc genhtml_function_coverage=1 00:09:02.051 --rc genhtml_legend=1 00:09:02.051 --rc geninfo_all_blocks=1 00:09:02.051 --rc geninfo_unexecuted_blocks=1 00:09:02.051 00:09:02.051 ' 00:09:02.051 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:02.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.051 --rc genhtml_branch_coverage=1 00:09:02.051 --rc genhtml_function_coverage=1 00:09:02.052 --rc genhtml_legend=1 00:09:02.052 --rc geninfo_all_blocks=1 00:09:02.052 --rc geninfo_unexecuted_blocks=1 00:09:02.052 00:09:02.052 ' 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:02.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.052 --rc genhtml_branch_coverage=1 00:09:02.052 --rc genhtml_function_coverage=1 00:09:02.052 --rc genhtml_legend=1 00:09:02.052 --rc geninfo_all_blocks=1 00:09:02.052 --rc geninfo_unexecuted_blocks=1 00:09:02.052 00:09:02.052 ' 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.052 09:42:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:02.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:02.052 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.197 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:10.197 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:10.197 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:10.197 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:10.197 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:10.197 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:10.197 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:10.197 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:10.197 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:10.197 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:10.197 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:10.197 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:10.197 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:10.197 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:10.197 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:10.197 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:10.197 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:10.198 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:10.198 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.198 09:42:40 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:10.198 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:10.198 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:10.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:10.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:09:10.198 00:09:10.198 --- 10.0.0.2 ping statistics --- 00:09:10.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.198 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:10.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:10.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:09:10.198 00:09:10.198 --- 10.0.0.1 ping statistics --- 00:09:10.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.198 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1201280 00:09:10.198 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1201280 00:09:10.199 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:10.199 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1201280 ']' 00:09:10.199 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.199 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.199 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.199 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.199 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.199 [2024-11-20 09:42:40.495387] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
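The nvmf_tcp_init trace above is the whole physical test bed: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator, TCP port 4420 is opened, and reachability is proven in both directions before the target is launched inside the namespace. A condensed sketch of those steps, using only commands and names that appear in the trace (SPDK_BIN_DIR stands in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin path):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # the ipts wrapper also tags it with an SPDK_NVMF comment
    ping -c 1 10.0.0.2                                               # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator
    ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!

--wait-for-rpc holds the target in its pre-init state, which is why bdev_set_options can still shrink the bdev_io pool before framework_start_init runs in the RPC sequence that follows.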
00:09:10.199 [2024-11-20 09:42:40.495452] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.199 [2024-11-20 09:42:40.592273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.199 [2024-11-20 09:42:40.646995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.199 [2024-11-20 09:42:40.647050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.199 [2024-11-20 09:42:40.647059] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.199 [2024-11-20 09:42:40.647066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.199 [2024-11-20 09:42:40.647072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.199 [2024-11-20 09:42:40.649475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.199 [2024-11-20 09:42:40.649618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.199 [2024-11-20 09:42:40.649779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.199 [2024-11-20 09:42:40.649778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.460 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.460 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:10.460 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:10.460 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:10.460 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.460 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.460 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:10.460 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.460 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:10.722 [2024-11-20 09:42:41.444746] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.722 Malloc0 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.722 [2024-11-20 09:42:41.510201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1201392 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1201395 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:10.722 { 00:09:10.722 "params": { 
00:09:10.722 "name": "Nvme$subsystem", 00:09:10.722 "trtype": "$TEST_TRANSPORT", 00:09:10.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:10.722 "adrfam": "ipv4", 00:09:10.722 "trsvcid": "$NVMF_PORT", 00:09:10.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:10.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:10.722 "hdgst": ${hdgst:-false}, 00:09:10.722 "ddgst": ${ddgst:-false} 00:09:10.722 }, 00:09:10.722 "method": "bdev_nvme_attach_controller" 00:09:10.722 } 00:09:10.722 EOF 00:09:10.722 )") 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1201398 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1201402 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:10.722 { 00:09:10.722 "params": { 00:09:10.722 "name": "Nvme$subsystem", 00:09:10.722 "trtype": "$TEST_TRANSPORT", 00:09:10.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:10.722 "adrfam": "ipv4", 00:09:10.722 "trsvcid": "$NVMF_PORT", 00:09:10.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:10.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:10.722 "hdgst": ${hdgst:-false}, 00:09:10.722 "ddgst": ${ddgst:-false} 00:09:10.722 }, 00:09:10.722 "method": "bdev_nvme_attach_controller" 00:09:10.722 } 00:09:10.722 EOF 00:09:10.722 )") 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:10.722 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:10.723 { 00:09:10.723 "params": { 00:09:10.723 "name": "Nvme$subsystem", 00:09:10.723 "trtype": "$TEST_TRANSPORT", 00:09:10.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:10.723 "adrfam": "ipv4", 00:09:10.723 "trsvcid": "$NVMF_PORT", 00:09:10.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:10.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:10.723 "hdgst": ${hdgst:-false}, 
00:09:10.723 "ddgst": ${ddgst:-false} 00:09:10.723 }, 00:09:10.723 "method": "bdev_nvme_attach_controller" 00:09:10.723 } 00:09:10.723 EOF 00:09:10.723 )") 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:10.723 { 00:09:10.723 "params": { 00:09:10.723 "name": "Nvme$subsystem", 00:09:10.723 "trtype": "$TEST_TRANSPORT", 00:09:10.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:10.723 "adrfam": "ipv4", 00:09:10.723 "trsvcid": "$NVMF_PORT", 00:09:10.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:10.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:10.723 "hdgst": ${hdgst:-false}, 00:09:10.723 "ddgst": ${ddgst:-false} 00:09:10.723 }, 00:09:10.723 "method": "bdev_nvme_attach_controller" 00:09:10.723 } 00:09:10.723 EOF 00:09:10.723 )") 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1201392 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:10.723 "params": { 00:09:10.723 "name": "Nvme1", 00:09:10.723 "trtype": "tcp", 00:09:10.723 "traddr": "10.0.0.2", 00:09:10.723 "adrfam": "ipv4", 00:09:10.723 "trsvcid": "4420", 00:09:10.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:10.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:10.723 "hdgst": false, 00:09:10.723 "ddgst": false 00:09:10.723 }, 00:09:10.723 "method": "bdev_nvme_attach_controller" 00:09:10.723 }' 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:10.723 "params": { 00:09:10.723 "name": "Nvme1", 00:09:10.723 "trtype": "tcp", 00:09:10.723 "traddr": "10.0.0.2", 00:09:10.723 "adrfam": "ipv4", 00:09:10.723 "trsvcid": "4420", 00:09:10.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:10.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:10.723 "hdgst": false, 00:09:10.723 "ddgst": false 00:09:10.723 }, 00:09:10.723 "method": "bdev_nvme_attach_controller" 00:09:10.723 }' 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:10.723 "params": { 00:09:10.723 "name": "Nvme1", 00:09:10.723 "trtype": "tcp", 00:09:10.723 "traddr": "10.0.0.2", 00:09:10.723 "adrfam": "ipv4", 00:09:10.723 "trsvcid": "4420", 00:09:10.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:10.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:10.723 "hdgst": false, 00:09:10.723 "ddgst": false 00:09:10.723 }, 00:09:10.723 "method": "bdev_nvme_attach_controller" 00:09:10.723 }' 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:10.723 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:10.723 "params": { 00:09:10.723 "name": "Nvme1", 00:09:10.723 "trtype": "tcp", 00:09:10.723 "traddr": "10.0.0.2", 00:09:10.723 "adrfam": "ipv4", 00:09:10.723 "trsvcid": "4420", 00:09:10.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:10.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:10.723 "hdgst": false, 00:09:10.723 "ddgst": false 00:09:10.723 }, 00:09:10.723 "method": "bdev_nvme_attach_controller" 00:09:10.723 }' 00:09:10.723 [2024-11-20 09:42:41.573472] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:10.723 [2024-11-20 09:42:41.573475] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:10.723 [2024-11-20 09:42:41.573540] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:10.723 [2024-11-20 09:42:41.573542] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:10.723 [2024-11-20 09:42:41.580806] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:10.723 [2024-11-20 09:42:41.580893] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:10.723 [2024-11-20 09:42:41.583762] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
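The Starting SPDK banners just above are four bdevperf instances launching in parallel, one I/O type per process, each on its own core mask (0x10/0x20/0x40/0x80) and shared-memory id (-i 1..4) so the EAL instances stay apart; because all four share the console, their banners can interleave in the raw capture (the spdk2/spdk3 parameter lines above were logged interleaved). Condensed from the bdev_io_wait.sh@27-40 trace lines, with backgrounding and PID capture made explicit:

    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    "$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    "$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    "$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"          # sh@37-40 wait on each in turn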
00:09:10.723 [2024-11-20 09:42:41.583825] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:10.985 [2024-11-20 09:42:41.769085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.985 [2024-11-20 09:42:41.804696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:10.985 [2024-11-20 09:42:41.833525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.985 [2024-11-20 09:42:41.868069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:10.985 [2024-11-20 09:42:41.891234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.247 [2024-11-20 09:42:41.930985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:11.247 [2024-11-20 09:42:41.984026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.247 [2024-11-20 09:42:42.026367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:11.509 Running I/O for 1 seconds... 00:09:11.509 Running I/O for 1 seconds... 00:09:11.509 Running I/O for 1 seconds... 00:09:11.509 Running I/O for 1 seconds... 00:09:12.453 7172.00 IOPS, 28.02 MiB/s 00:09:12.453 Latency(us) 00:09:12.453 [2024-11-20T08:42:43.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.453 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:12.453 Nvme1n1 : 1.02 7181.70 28.05 0.00 0.00 17666.92 5734.40 22828.37 00:09:12.453 [2024-11-20T08:42:43.369Z] =================================================================================================================== 00:09:12.453 [2024-11-20T08:42:43.369Z] Total : 7181.70 28.05 0.00 0.00 17666.92 5734.40 22828.37 00:09:12.453 185944.00 IOPS, 726.34 MiB/s 00:09:12.453 Latency(us) 00:09:12.453 [2024-11-20T08:42:43.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.453 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:12.453 Nvme1n1 : 1.00 185566.58 724.87 0.00 0.00 686.03 305.49 2020.69 00:09:12.453 [2024-11-20T08:42:43.369Z] =================================================================================================================== 00:09:12.453 [2024-11-20T08:42:43.369Z] Total : 185566.58 724.87 0.00 0.00 686.03 305.49 2020.69 00:09:12.453 7148.00 IOPS, 27.92 MiB/s 00:09:12.453 Latency(us) 00:09:12.453 [2024-11-20T08:42:43.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.453 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:12.453 Nvme1n1 : 1.01 7263.37 28.37 0.00 0.00 17567.66 4805.97 32331.09 00:09:12.453 [2024-11-20T08:42:43.369Z] =================================================================================================================== 00:09:12.453 [2024-11-20T08:42:43.370Z] Total : 7263.37 28.37 0.00 0.00 17567.66 4805.97 32331.09 00:09:12.454 10696.00 IOPS, 41.78 MiB/s 00:09:12.454 Latency(us) 00:09:12.454 [2024-11-20T08:42:43.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.454 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:12.454 Nvme1n1 : 1.01 10769.93 42.07 0.00 0.00 11842.92 5324.80 23592.96 00:09:12.454 [2024-11-20T08:42:43.370Z] 
=================================================================================================================== 00:09:12.454 [2024-11-20T08:42:43.370Z] Total : 10769.93 42.07 0.00 0.00 11842.92 5324.80 23592.96 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1201395 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1201398 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1201402 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:12.716 rmmod nvme_tcp 00:09:12.716 rmmod nvme_fabrics 00:09:12.716 rmmod nvme_keyring 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1201280 ']' 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1201280 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1201280 ']' 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1201280 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1201280 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 1201280' 00:09:12.716 killing process with pid 1201280 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1201280 00:09:12.716 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1201280 00:09:12.978 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:12.978 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:12.978 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:12.978 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:12.978 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:12.978 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:12.978 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:12.978 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:12.978 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:12.978 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.978 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.978 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.896 09:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:14.896 00:09:14.896 real 0m13.092s 00:09:14.896 user 0m19.778s 00:09:14.896 sys 0m7.410s 00:09:14.896 09:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.896 09:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.896 ************************************ 00:09:14.896 END TEST nvmf_bdev_io_wait 00:09:14.896 ************************************ 00:09:15.157 09:42:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:15.157 09:42:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:15.157 09:42:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.157 09:42:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:15.157 ************************************ 00:09:15.157 START TEST nvmf_queue_depth 00:09:15.157 ************************************ 00:09:15.157 09:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:15.157 * Looking for test storage... 
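The nvmf_queue_depth test starting here rebuilds the same bed from scratch; nvmftestfini above tore the previous one down first. Roughly, with ip netns delete as the assumed body of the untraced _remove_spdk_ns helper:

    kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess 1201280
    modprobe -v -r nvme-tcp                               # rmmod shows nvme_tcp, nvme_fabrics, nvme_keyring going
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # the iptr helper: drop only the SPDK-tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                       # assumption: what _remove_spdk_ns amounts to
    ip -4 addr flush cvl_0_1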
00:09:15.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:15.157 09:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:15.157 09:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:15.157 09:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:15.157 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.418 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.418 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.418 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:15.418 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.418 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:15.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.418 --rc genhtml_branch_coverage=1 00:09:15.418 --rc genhtml_function_coverage=1 00:09:15.418 --rc genhtml_legend=1 00:09:15.418 --rc geninfo_all_blocks=1 00:09:15.418 --rc geninfo_unexecuted_blocks=1 00:09:15.418 00:09:15.418 ' 00:09:15.418 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:15.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.418 --rc genhtml_branch_coverage=1 00:09:15.418 --rc genhtml_function_coverage=1 00:09:15.418 --rc genhtml_legend=1 00:09:15.418 --rc geninfo_all_blocks=1 00:09:15.418 --rc geninfo_unexecuted_blocks=1 00:09:15.418 00:09:15.418 ' 00:09:15.418 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:15.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.418 --rc genhtml_branch_coverage=1 00:09:15.419 --rc genhtml_function_coverage=1 00:09:15.419 --rc genhtml_legend=1 00:09:15.419 --rc geninfo_all_blocks=1 00:09:15.419 --rc geninfo_unexecuted_blocks=1 00:09:15.419 00:09:15.419 ' 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:15.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.419 --rc genhtml_branch_coverage=1 00:09:15.419 --rc genhtml_function_coverage=1 00:09:15.419 --rc genhtml_legend=1 00:09:15.419 --rc geninfo_all_blocks=1 00:09:15.419 --rc geninfo_unexecuted_blocks=1 00:09:15.419 00:09:15.419 ' 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:15.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:15.419 09:42:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.566 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:23.567 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:23.567 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:23.567 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:23.567 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:23.567 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:23.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:23.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:09:23.567 00:09:23.567 --- 10.0.0.2 ping statistics --- 00:09:23.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.567 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:23.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:23.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:09:23.568 00:09:23.568 --- 10.0.0.1 ping statistics --- 00:09:23.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.568 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1206015 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1206015 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1206015 ']' 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.568 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.568 [2024-11-20 09:42:53.715129] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
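The nvmf_tcp_init sequence traced above reduces to isolating the target-side port in its own network namespace so one host can drive both ends of the connection over real NICs. A minimal sketch of that setup, reusing the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addresses seen in this run (both are rig-specific values from the trace, not fixed SPDK defaults):

    # Target interface moves into a private namespace; initiator stays in the root ns.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Admit NVMe/TCP traffic on the initiator port, then verify both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

With that in place, every later nvmf_tgt instance is launched under ip netns exec cvl_0_0_ns_spdk, exactly as the NVMF_TARGET_NS_CMD line above shows.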
00:09:23.568 [2024-11-20 09:42:53.715206] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.568 [2024-11-20 09:42:53.816361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.568 [2024-11-20 09:42:53.867376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.568 [2024-11-20 09:42:53.867423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.568 [2024-11-20 09:42:53.867432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.568 [2024-11-20 09:42:53.867439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.568 [2024-11-20 09:42:53.867445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.568 [2024-11-20 09:42:53.868263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.829 [2024-11-20 09:42:54.578585] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.829 Malloc0 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.829 09:42:54 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.829 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.830 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.830 [2024-11-20 09:42:54.639555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.830 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.830 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1206356 00:09:23.830 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:23.830 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:23.830 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1206356 /var/tmp/bdevperf.sock 00:09:23.830 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1206356 ']' 00:09:23.830 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:23.830 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.830 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:23.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:23.830 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.830 09:42:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.830 [2024-11-20 09:42:54.706928] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
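Stripped of the xtrace noise, queue_depth.sh builds the target with a handful of rpc.py calls and then points a bdevperf instance at it over a second RPC socket. A condensed sketch using the exact arguments from this run ($SPDK is a placeholder for the jenkins workspace checkout; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py):

    SPDK=/path/to/spdk   # assumption: stands in for the workspace path in this log
    RPC="$SPDK/scripts/rpc.py"
    # Target side: TCP transport, a 64 MiB malloc bdev with 512 B blocks,
    # and a subsystem exporting it on 10.0.0.2:4420.
    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" bdev_malloc_create 64 512 -b Malloc0
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: bdevperf waits (-z) on its own socket; QD 1024, 4 KiB verify, 10 s.
    # (The harness waits for /var/tmp/bdevperf.sock to appear before the attach.)
    "$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

The trace below shows exactly this sequence: the EAL banner for the bdevperf process, the controller attach producing NVMe0n1, and the 10-second verify run whose JSON summary reports roughly 12.2k IOPS at queue depth 1024.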
00:09:23.830 [2024-11-20 09:42:54.706994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1206356 ] 00:09:24.090 [2024-11-20 09:42:54.801567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.090 [2024-11-20 09:42:54.853932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.662 09:42:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.662 09:42:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:24.662 09:42:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:24.662 09:42:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.662 09:42:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.923 NVMe0n1 00:09:24.923 09:42:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.923 09:42:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:24.923 Running I/O for 10 seconds... 00:09:27.249 9070.00 IOPS, 35.43 MiB/s [2024-11-20T08:42:59.105Z] 10240.00 IOPS, 40.00 MiB/s [2024-11-20T08:43:00.047Z] 10584.67 IOPS, 41.35 MiB/s [2024-11-20T08:43:00.990Z] 11011.00 IOPS, 43.01 MiB/s [2024-11-20T08:43:01.930Z] 11302.40 IOPS, 44.15 MiB/s [2024-11-20T08:43:02.871Z] 11607.17 IOPS, 45.34 MiB/s [2024-11-20T08:43:04.255Z] 11788.14 IOPS, 46.05 MiB/s [2024-11-20T08:43:04.825Z] 11906.50 IOPS, 46.51 MiB/s [2024-11-20T08:43:06.208Z] 12060.89 IOPS, 47.11 MiB/s [2024-11-20T08:43:06.208Z] 12186.20 IOPS, 47.60 MiB/s 00:09:35.292 Latency(us) 00:09:35.292 [2024-11-20T08:43:06.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.292 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:35.292 Verification LBA range: start 0x0 length 0x4000 00:09:35.292 NVMe0n1 : 10.05 12223.30 47.75 0.00 0.00 83507.45 16711.68 76458.67 00:09:35.292 [2024-11-20T08:43:06.208Z] =================================================================================================================== 00:09:35.292 [2024-11-20T08:43:06.208Z] Total : 12223.30 47.75 0.00 0.00 83507.45 16711.68 76458.67 00:09:35.292 { 00:09:35.292 "results": [ 00:09:35.292 { 00:09:35.292 "job": "NVMe0n1", 00:09:35.292 "core_mask": "0x1", 00:09:35.292 "workload": "verify", 00:09:35.292 "status": "finished", 00:09:35.292 "verify_range": { 00:09:35.292 "start": 0, 00:09:35.292 "length": 16384 00:09:35.292 }, 00:09:35.292 "queue_depth": 1024, 00:09:35.292 "io_size": 4096, 00:09:35.292 "runtime": 10.053425, 00:09:35.292 "iops": 12223.297035587375, 00:09:35.292 "mibps": 47.74725404526318, 00:09:35.292 "io_failed": 0, 00:09:35.292 "io_timeout": 0, 00:09:35.292 "avg_latency_us": 83507.45404315111, 00:09:35.292 "min_latency_us": 16711.68, 00:09:35.292 "max_latency_us": 76458.66666666667 00:09:35.292 } 00:09:35.292 ], 00:09:35.292 "core_count": 1 00:09:35.292 } 00:09:35.292 09:43:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1206356 00:09:35.292 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1206356 ']' 00:09:35.292 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1206356 00:09:35.292 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:35.292 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.292 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1206356 00:09:35.292 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.292 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.292 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1206356' 00:09:35.292 killing process with pid 1206356 00:09:35.292 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1206356 00:09:35.292 Received shutdown signal, test time was about 10.000000 seconds 00:09:35.292 00:09:35.292 Latency(us) 00:09:35.292 [2024-11-20T08:43:06.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.292 [2024-11-20T08:43:06.208Z] =================================================================================================================== 00:09:35.292 [2024-11-20T08:43:06.208Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:35.292 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1206356 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:35.292 rmmod nvme_tcp 00:09:35.292 rmmod nvme_fabrics 00:09:35.292 rmmod nvme_keyring 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1206015 ']' 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1206015 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1206015 ']' 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 1206015 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1206015 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:35.292 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:35.553 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1206015' 00:09:35.553 killing process with pid 1206015 00:09:35.553 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1206015 00:09:35.553 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1206015 00:09:35.553 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:35.553 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:35.553 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:35.553 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:35.553 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:35.553 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:35.553 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:35.553 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:35.553 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:35.553 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.553 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.553 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.558 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:37.558 00:09:37.558 real 0m22.523s 00:09:37.558 user 0m25.709s 00:09:37.558 sys 0m7.151s 00:09:37.558 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.558 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:37.558 ************************************ 00:09:37.558 END TEST nvmf_queue_depth 00:09:37.558 ************************************ 00:09:37.558 09:43:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:37.558 09:43:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:37.558 09:43:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.558 09:43:08 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:37.820 ************************************ 00:09:37.820 START TEST nvmf_target_multipath 00:09:37.820 ************************************ 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:37.820 * Looking for test storage... 00:09:37.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:37.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.820 --rc genhtml_branch_coverage=1 00:09:37.820 --rc genhtml_function_coverage=1 00:09:37.820 --rc genhtml_legend=1 00:09:37.820 --rc geninfo_all_blocks=1 00:09:37.820 --rc geninfo_unexecuted_blocks=1 00:09:37.820 00:09:37.820 ' 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:37.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.820 --rc genhtml_branch_coverage=1 00:09:37.820 --rc genhtml_function_coverage=1 00:09:37.820 --rc genhtml_legend=1 00:09:37.820 --rc geninfo_all_blocks=1 00:09:37.820 --rc geninfo_unexecuted_blocks=1 00:09:37.820 00:09:37.820 ' 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:37.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.820 --rc genhtml_branch_coverage=1 00:09:37.820 --rc genhtml_function_coverage=1 00:09:37.820 --rc genhtml_legend=1 00:09:37.820 --rc geninfo_all_blocks=1 00:09:37.820 --rc geninfo_unexecuted_blocks=1 00:09:37.820 00:09:37.820 ' 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:37.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.820 --rc genhtml_branch_coverage=1 00:09:37.820 --rc genhtml_function_coverage=1 00:09:37.820 --rc genhtml_legend=1 00:09:37.820 --rc geninfo_all_blocks=1 00:09:37.820 --rc geninfo_unexecuted_blocks=1 00:09:37.820 00:09:37.820 ' 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.820 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:37.821 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:45.964 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:45.964 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:45.964 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.964 09:43:15 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:45.964 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:45.964 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.964 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.964 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.964 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:45.964 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:45.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:09:45.965 00:09:45.965 --- 10.0.0.2 ping statistics --- 00:09:45.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.965 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:09:45.965 00:09:45.965 --- 10.0.0.1 ping statistics --- 00:09:45.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.965 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:45.965 only one NIC for nvmf test 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
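multipath.sh stops here by design: NVMF_SECOND_TARGET_IP was left empty during init (only one usable NIC pair on this rig), so the [ -z ] guard at multipath.sh@45 trips, the script prints 'only one NIC for nvmf test', and nvmftestfini unwinds the whole setup before exit 0. The teardown that follows is essentially the mirror image of nvmf_tcp_init; sketched below with the same names as above (the ip netns delete line is an assumption for what the xtrace-suppressed _remove_spdk_ns helper does):

    # Unload initiator-side modules; the harness retries this up to 20 times.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Drop only the firewall rules this test added, keyed on their SPDK_NVMF comment.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Remove the target namespace (assumed behavior of _remove_spdk_ns)
    # and clear the leftover initiator address.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1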
00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:45.965 rmmod nvme_tcp 00:09:45.965 rmmod nvme_fabrics 00:09:45.965 rmmod nvme_keyring 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.965 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.350 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:47.350 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:47.350 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:47.350 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:47.350 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:47.611 00:09:47.611 real 0m9.832s 00:09:47.611 user 0m2.079s 00:09:47.611 sys 0m5.684s 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:47.611 ************************************ 00:09:47.611 END TEST nvmf_target_multipath 00:09:47.611 ************************************ 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.611 ************************************ 00:09:47.611 START TEST nvmf_zcopy 00:09:47.611 ************************************ 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:47.611 * Looking for test storage... 
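[Note: the nvmftestfini teardown traced in the multipath test above is the mirror image of that setup: unload the kernel initiator modules, restore iptables minus the rules tagged SPDK_NVMF, and tear down the namespace. A sketch under the same naming assumptions; the body of _remove_spdk_ns is not shown in this log, so the netns deletion line is an assumption.]

    # Unload kernel initiator modules (the harness retries this up to 20 times,
    # since the modules can still be busy right after a test).
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Drop only the rules tagged SPDK_NVMF at setup time, leaving the rest intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Assumption: _remove_spdk_ns amounts to deleting the test namespace.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1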
00:09:47.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:47.611 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:47.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.874 --rc genhtml_branch_coverage=1 00:09:47.874 --rc genhtml_function_coverage=1 00:09:47.874 --rc genhtml_legend=1 00:09:47.874 --rc geninfo_all_blocks=1 00:09:47.874 --rc geninfo_unexecuted_blocks=1 00:09:47.874 00:09:47.874 ' 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:47.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.874 --rc genhtml_branch_coverage=1 00:09:47.874 --rc genhtml_function_coverage=1 00:09:47.874 --rc genhtml_legend=1 00:09:47.874 --rc geninfo_all_blocks=1 00:09:47.874 --rc geninfo_unexecuted_blocks=1 00:09:47.874 00:09:47.874 ' 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:47.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.874 --rc genhtml_branch_coverage=1 00:09:47.874 --rc genhtml_function_coverage=1 00:09:47.874 --rc genhtml_legend=1 00:09:47.874 --rc geninfo_all_blocks=1 00:09:47.874 --rc geninfo_unexecuted_blocks=1 00:09:47.874 00:09:47.874 ' 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:47.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.874 --rc genhtml_branch_coverage=1 00:09:47.874 --rc genhtml_function_coverage=1 00:09:47.874 --rc genhtml_legend=1 00:09:47.874 --rc geninfo_all_blocks=1 00:09:47.874 --rc geninfo_unexecuted_blocks=1 00:09:47.874 00:09:47.874 ' 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.874 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:47.875 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:56.022 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:56.022 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:56.022 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:56.022 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.022 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:56.023 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:56.023 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.023 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.023 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:56.023 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:56.023 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.023 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.023 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.023 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.023 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:56.023 09:43:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:56.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:56.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:09:56.023 00:09:56.023 --- 10.0.0.2 ping statistics --- 00:09:56.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.023 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:56.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:56.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:09:56.023 00:09:56.023 --- 10.0.0.1 ping statistics --- 00:09:56.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.023 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1217060 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1217060 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1217060 ']' 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.023 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.023 [2024-11-20 09:43:26.173299] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:56.023 [2024-11-20 09:43:26.173365] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.023 [2024-11-20 09:43:26.272979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.023 [2024-11-20 09:43:26.323503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.023 [2024-11-20 09:43:26.323555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.023 [2024-11-20 09:43:26.323580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.023 [2024-11-20 09:43:26.323587] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.023 [2024-11-20 09:43:26.323593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.023 [2024-11-20 09:43:26.324351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.285 [2024-11-20 09:43:27.050152] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.285 [2024-11-20 09:43:27.074456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.285 malloc0 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:56.285 { 00:09:56.285 "params": { 00:09:56.285 "name": "Nvme$subsystem", 00:09:56.285 "trtype": "$TEST_TRANSPORT", 00:09:56.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:56.285 "adrfam": "ipv4", 00:09:56.285 "trsvcid": "$NVMF_PORT", 00:09:56.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:56.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:56.285 "hdgst": ${hdgst:-false}, 00:09:56.285 "ddgst": ${ddgst:-false} 00:09:56.285 }, 00:09:56.285 "method": "bdev_nvme_attach_controller" 00:09:56.285 } 00:09:56.285 EOF 00:09:56.285 )") 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
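[Note: the rpc_cmd calls traced above are the entire target-side configuration for this test. As standalone scripts/rpc.py invocations they would look like the sketch below; rpc_cmd is the harness wrapper around rpc.py (here routed into the target's namespace, with the default /var/tmp/spdk.sock assumed), and every argument is copied from the trace.]

    # Configure the target: zero-copy TCP transport, one subsystem, one
    # 32 MiB malloc-backed namespace, data + discovery listeners on port 4420.
    rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 4096 -b malloc0   # 32 MiB, 4096-byte blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1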
00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:56.285 09:43:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:56.285 "params": { 00:09:56.285 "name": "Nvme1", 00:09:56.285 "trtype": "tcp", 00:09:56.285 "traddr": "10.0.0.2", 00:09:56.285 "adrfam": "ipv4", 00:09:56.285 "trsvcid": "4420", 00:09:56.285 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:56.285 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:56.286 "hdgst": false, 00:09:56.286 "ddgst": false 00:09:56.286 }, 00:09:56.286 "method": "bdev_nvme_attach_controller" 00:09:56.286 }' 00:09:56.286 [2024-11-20 09:43:27.175141] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:56.286 [2024-11-20 09:43:27.175213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1217289 ] 00:09:56.548 [2024-11-20 09:43:27.265186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.548 [2024-11-20 09:43:27.318313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.809 Running I/O for 10 seconds... 00:09:58.698 6454.00 IOPS, 50.42 MiB/s [2024-11-20T08:43:30.557Z] 6514.00 IOPS, 50.89 MiB/s [2024-11-20T08:43:31.941Z] 7491.00 IOPS, 58.52 MiB/s [2024-11-20T08:43:32.882Z] 8062.50 IOPS, 62.99 MiB/s [2024-11-20T08:43:33.823Z] 8400.20 IOPS, 65.63 MiB/s [2024-11-20T08:43:34.765Z] 8629.67 IOPS, 67.42 MiB/s [2024-11-20T08:43:35.707Z] 8791.86 IOPS, 68.69 MiB/s [2024-11-20T08:43:36.647Z] 8913.50 IOPS, 69.64 MiB/s [2024-11-20T08:43:37.588Z] 9004.78 IOPS, 70.35 MiB/s [2024-11-20T08:43:37.588Z] 9079.90 IOPS, 70.94 MiB/s 00:10:06.672 Latency(us) 00:10:06.672 [2024-11-20T08:43:37.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.672 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:06.672 Verification LBA range: start 0x0 length 0x1000 00:10:06.672 Nvme1n1 : 10.01 9081.98 70.95 0.00 0.00 14045.39 1645.23 27852.80 00:10:06.672 [2024-11-20T08:43:37.588Z] =================================================================================================================== 00:10:06.672 [2024-11-20T08:43:37.588Z] Total : 9081.98 70.95 0.00 0.00 14045.39 1645.23 27852.80 00:10:06.931 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1219441 00:10:06.931 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:06.931 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.931 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:06.931 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:06.931 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:06.931 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:06.931 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:06.931 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:06.931 { 00:10:06.931 "params": { 00:10:06.931 "name": 
"Nvme$subsystem", 00:10:06.931 "trtype": "$TEST_TRANSPORT", 00:10:06.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:06.931 "adrfam": "ipv4", 00:10:06.931 "trsvcid": "$NVMF_PORT", 00:10:06.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:06.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:06.931 "hdgst": ${hdgst:-false}, 00:10:06.931 "ddgst": ${ddgst:-false} 00:10:06.931 }, 00:10:06.931 "method": "bdev_nvme_attach_controller" 00:10:06.931 } 00:10:06.931 EOF 00:10:06.931 )") 00:10:06.931 [2024-11-20 09:43:37.667062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.931 [2024-11-20 09:43:37.667095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.932 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:06.932 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:06.932 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:06.932 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:06.932 "params": { 00:10:06.932 "name": "Nvme1", 00:10:06.932 "trtype": "tcp", 00:10:06.932 "traddr": "10.0.0.2", 00:10:06.932 "adrfam": "ipv4", 00:10:06.932 "trsvcid": "4420", 00:10:06.932 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:06.932 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:06.932 "hdgst": false, 00:10:06.932 "ddgst": false 00:10:06.932 }, 00:10:06.932 "method": "bdev_nvme_attach_controller" 00:10:06.932 }' 00:10:06.932 [2024-11-20 09:43:37.679059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.932 [2024-11-20 09:43:37.679068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.932 [2024-11-20 09:43:37.691090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.932 [2024-11-20 09:43:37.691099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.932 [2024-11-20 09:43:37.703122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.932 [2024-11-20 09:43:37.703130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.932 [2024-11-20 09:43:37.710154] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:10:06.932 [2024-11-20 09:43:37.710211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1219441 ] 00:10:06.932 [2024-11-20 09:43:37.715152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.932 [2024-11-20 09:43:37.715164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.932 [2024-11-20 09:43:37.727187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.932 [2024-11-20 09:43:37.727195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.932 [2024-11-20 09:43:37.739218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.932 [2024-11-20 09:43:37.739226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.932 [2024-11-20 09:43:37.751249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.932 [2024-11-20 09:43:37.751256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.932 [2024-11-20 09:43:37.763279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.932 [2024-11-20 09:43:37.763286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.932 [2024-11-20 09:43:37.775310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.932 [2024-11-20 09:43:37.775317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.932 [2024-11-20 09:43:37.787339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.932 [2024-11-20 09:43:37.787347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.932 [2024-11-20 09:43:37.791609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.932 [2024-11-20 09:43:37.799369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.932 [2024-11-20 09:43:37.799382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.932 [2024-11-20 09:43:37.811401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.932 [2024-11-20 09:43:37.811410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.932 [2024-11-20 09:43:37.820644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.932 [2024-11-20 09:43:37.823432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.932 [2024-11-20 09:43:37.823441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.932 [2024-11-20 09:43:37.835469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.932 [2024-11-20 09:43:37.835480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.192 [2024-11-20 09:43:37.847497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.192 [2024-11-20 09:43:37.847509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.192 [2024-11-20 09:43:37.859524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:10:07.192 [2024-11-20 09:43:37.859535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.192 [2024-11-20 09:43:37.871555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.192 [2024-11-20 09:43:37.871564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.192 [2024-11-20 09:43:37.883586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.192 [2024-11-20 09:43:37.883594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.192 [2024-11-20 09:43:37.895631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.192 [2024-11-20 09:43:37.895648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.192 [2024-11-20 09:43:37.907652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.192 [2024-11-20 09:43:37.907661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.192 [2024-11-20 09:43:37.919686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.192 [2024-11-20 09:43:37.919697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.192 [2024-11-20 09:43:37.931714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.192 [2024-11-20 09:43:37.931723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.192 [2024-11-20 09:43:37.943745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.192 [2024-11-20 09:43:37.943756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.192 [2024-11-20 09:43:37.955785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.192 [2024-11-20 09:43:37.955801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.192 Running I/O for 5 seconds... 
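[Note: from here to the end of the 5-second run the log is the same two-line pair repeating: bdevperf drives randrw I/O while the test keeps re-issuing nvmf_subsystem_add_ns for an NSID that already exists, so each attempt logs 'Requested NSID 1 already in use' followed by 'Unable to add namespace'. The point is to exercise RPC churn, and the pause/resume path it triggers, while zero-copy I/O is in flight. The loop below is only a plausible shape for that churn, not the actual zcopy.sh body, which this log does not reproduce; $perfpid is the bdevperf pid recorded by the trace (perfpid=1219441).]

    # Illustrative only -- RPC churn against a live namespace while I/O runs.
    while kill -0 "$perfpid" 2>/dev/null; do
        # NSID 1 is occupied, so each call fails with the error pair seen below.
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done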
00:10:07.192 [2024-11-20 09:43:37.967806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:07.192 [2024-11-20 09:43:37.967813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the two-line error pair above repeats with fresh timestamps, a new attempt roughly every 13 ms, from 09:43:37.967 onward; only the periodic throughput samples kept below interrupt it ...]
00:10:08.236 19119.00 IOPS, 149.37 MiB/s [2024-11-20T08:43:39.152Z]
00:10:09.286 19164.00 IOPS, 149.72 MiB/s [2024-11-20T08:43:40.202Z]
00:10:10.071 19202.67 IOPS, 150.02 MiB/s [2024-11-20T08:43:40.987Z]
[... the same error pair keeps repeating through 09:43:41.959 ...]
00:10:11.115 [2024-11-20 09:43:41.959769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 09:43:41.959784] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.115 [2024-11-20 09:43:41.972603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.115 [2024-11-20 09:43:41.972618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.115 19216.00 IOPS, 150.12 MiB/s [2024-11-20T08:43:42.031Z] [2024-11-20 09:43:41.985417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.115 [2024-11-20 09:43:41.985432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.115 [2024-11-20 09:43:41.997918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.115 [2024-11-20 09:43:41.997934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.115 [2024-11-20 09:43:42.011058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.115 [2024-11-20 09:43:42.011073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.115 [2024-11-20 09:43:42.024294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.115 [2024-11-20 09:43:42.024309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 09:43:42.037540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.037556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 09:43:42.050910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.050925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 09:43:42.063408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.063423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 09:43:42.075658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.075673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 09:43:42.089045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.089060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 09:43:42.102146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.102166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 09:43:42.114647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.114663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 09:43:42.128049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.128065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 09:43:42.141474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.141490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 
09:43:42.154524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.154540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 09:43:42.167742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.167758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 09:43:42.180972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.180989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 09:43:42.193757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.193773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 09:43:42.206971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.206987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 09:43:42.220446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.220463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 09:43:42.234299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.234316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 09:43:42.246971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.246986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 09:43:42.260844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.260860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 09:43:42.274149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.274170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.376 [2024-11-20 09:43:42.287393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.376 [2024-11-20 09:43:42.287409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.300304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.300320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.313533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.313549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.327192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.327209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.340398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.340414] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.353473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.353489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.366844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.366859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.380320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.380336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.393783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.393800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.406757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.406774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.419800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.419816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.433188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.433204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.446126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.446142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.459696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.459712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.472199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.472222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.485306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.485322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.498461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.498476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.511239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.511254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.524319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.524335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.537603] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.537620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.638 [2024-11-20 09:43:42.550733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.638 [2024-11-20 09:43:42.550748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.563257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.563273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.575405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.575420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.588707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.588723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.601269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.601285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.614386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.614402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.627566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.627583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.641034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.641051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.654380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.654397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.667676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.667692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.680942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.680957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.694427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.694443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.707283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.707299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.720030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.720050] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.733746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.733761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.746537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.746552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.759591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.759606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.772032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.772047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.785570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.785586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.798870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.798885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.899 [2024-11-20 09:43:42.811811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.899 [2024-11-20 09:43:42.811827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 [2024-11-20 09:43:42.824234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:42.824250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 [2024-11-20 09:43:42.836582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:42.836598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 [2024-11-20 09:43:42.849338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:42.849353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 [2024-11-20 09:43:42.862487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:42.862502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 [2024-11-20 09:43:42.875691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:42.875706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 [2024-11-20 09:43:42.888029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:42.888043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 [2024-11-20 09:43:42.900676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:42.900692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 [2024-11-20 09:43:42.914290] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:42.914306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 [2024-11-20 09:43:42.927740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:42.927756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 [2024-11-20 09:43:42.941033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:42.941049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 [2024-11-20 09:43:42.954218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:42.954233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 [2024-11-20 09:43:42.967093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:42.967112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 19239.00 IOPS, 150.30 MiB/s [2024-11-20T08:43:43.076Z] [2024-11-20 09:43:42.980132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:42.980147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 00:10:12.160 Latency(us) 00:10:12.160 [2024-11-20T08:43:43.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:12.160 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:12.160 Nvme1n1 : 5.01 19239.62 150.31 0.00 0.00 6646.83 2935.47 16820.91 00:10:12.160 [2024-11-20T08:43:43.076Z] =================================================================================================================== 00:10:12.160 [2024-11-20T08:43:43.076Z] Total : 19239.62 150.31 0.00 0.00 6646.83 2935.47 16820.91 00:10:12.160 [2024-11-20 09:43:42.989275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:42.989290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 [2024-11-20 09:43:43.001308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:43.001321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 [2024-11-20 09:43:43.013337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:43.013350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 [2024-11-20 09:43:43.025369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:43.025380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 [2024-11-20 09:43:43.037396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:43.037405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 [2024-11-20 09:43:43.049423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:43.049432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.160 [2024-11-20 
09:43:43.061454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.160 [2024-11-20 09:43:43.061463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.421 [2024-11-20 09:43:43.073487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.421 [2024-11-20 09:43:43.073497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.421 [2024-11-20 09:43:43.085515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.421 [2024-11-20 09:43:43.085524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1219441) - No such process 00:10:12.421 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1219441 00:10:12.421 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.421 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.421 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.421 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.421 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:12.421 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.421 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.421 delay0 00:10:12.421 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.421 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:12.421 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.421 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.421 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.421 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:12.421 [2024-11-20 09:43:43.300331] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:19.004 Initializing NVMe Controllers 00:10:19.004 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:19.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:19.004 Initialization complete. Launching workers. 
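The flood of "Requested NSID 1 already in use" pairs above is expected: zcopy.sh keeps re-issuing nvmf_subsystem_add_ns against an already-occupied NSID while I/O is in flight, checking that the RPC fails cleanly on every attempt rather than corrupting the paused subsystem. The script then swapped the namespace for a delay bdev and launched the abort example (above); its completion counts follow below. A minimal stand-alone sketch of that delay/abort scenario, assuming a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 — the scripts/rpc.py spelling and the malloc0 creation step are assumptions, the remaining flags mirror the trace:

    # backing bdev (assumed here; the captured run reuses an existing malloc0)
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    # wrap it in a delay bdev so I/O stays queued long enough to be aborted
    # (values are microseconds: avg and p99 latency for reads and writes, ~1 s each)
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # expose the slow bdev as NSID 1 of the test subsystem
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # drive queued I/O and submit aborts against it for 5 seconds
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'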
00:10:19.004 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 726 00:10:19.004 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1009, failed to submit 37 00:10:19.004 success 807, unsuccessful 202, failed 0 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:19.004 rmmod nvme_tcp 00:10:19.004 rmmod nvme_fabrics 00:10:19.004 rmmod nvme_keyring 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1217060 ']' 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1217060 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1217060 ']' 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1217060 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1217060 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1217060' 00:10:19.004 killing process with pid 1217060 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1217060 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1217060 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:19.004 09:43:49 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.004 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.549 09:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:21.549 00:10:21.549 real 0m33.497s 00:10:21.549 user 0m44.328s 00:10:21.549 sys 0m11.032s 00:10:21.549 09:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.549 09:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.549 ************************************ 00:10:21.549 END TEST nvmf_zcopy 00:10:21.550 ************************************ 00:10:21.550 09:43:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:21.550 09:43:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:21.550 09:43:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.550 09:43:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:21.550 ************************************ 00:10:21.550 START TEST nvmf_nmic 00:10:21.550 ************************************ 00:10:21.550 09:43:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:21.550 * Looking for test storage... 
00:10:21.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:21.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.550 --rc genhtml_branch_coverage=1 00:10:21.550 --rc genhtml_function_coverage=1 00:10:21.550 --rc genhtml_legend=1 00:10:21.550 --rc geninfo_all_blocks=1 00:10:21.550 --rc geninfo_unexecuted_blocks=1 00:10:21.550 00:10:21.550 ' 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:21.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.550 --rc genhtml_branch_coverage=1 00:10:21.550 --rc genhtml_function_coverage=1 00:10:21.550 --rc genhtml_legend=1 00:10:21.550 --rc geninfo_all_blocks=1 00:10:21.550 --rc geninfo_unexecuted_blocks=1 00:10:21.550 00:10:21.550 ' 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:21.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.550 --rc genhtml_branch_coverage=1 00:10:21.550 --rc genhtml_function_coverage=1 00:10:21.550 --rc genhtml_legend=1 00:10:21.550 --rc geninfo_all_blocks=1 00:10:21.550 --rc geninfo_unexecuted_blocks=1 00:10:21.550 00:10:21.550 ' 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:21.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.550 --rc genhtml_branch_coverage=1 00:10:21.550 --rc genhtml_function_coverage=1 00:10:21.550 --rc genhtml_legend=1 00:10:21.550 --rc geninfo_all_blocks=1 00:10:21.550 --rc geninfo_unexecuted_blocks=1 00:10:21.550 00:10:21.550 ' 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
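The scripts/common.sh trace above is a dotted-version guard: cmp_versions splits both version strings on dots and compares them field by field, so lt 1.15 2 succeeds (1 < 2 in the first field) and the lcov-1.x coverage flags get exported. A condensed, hypothetical sketch of the check (the real helpers live in scripts/common.sh):

    lcov_ver=$(lcov --version | awk '{print $NF}')   # e.g. "1.15"
    if lt "$lcov_ver" 2; then                        # lt A B == cmp_versions A '<' B
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi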
00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.550 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.551 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:21.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:21.551 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:21.551 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:21.551 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:21.551 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:21.551 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:21.551 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:21.551 
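nvmftestinit now detects the supported NICs and builds the test network. In phy mode this boils down to moving one port of the detected pair into a private network namespace for the target and addressing the other port as the initiator. A rough manual equivalent of what the helper does below, using the cvl_0_0/cvl_0_1 names it is about to detect:

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

The two ping checks that follow confirm both directions are reachable before any NVMe traffic is attempted.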
09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:21.551 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.551 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:21.551 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:21.551 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:21.551 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.551 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.551 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.551 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:21.551 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:21.551 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:21.551 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.691 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.691 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:29.691 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:29.691 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:29.691 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:29.691 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:29.691 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:29.691 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:29.691 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:29.692 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:29.692 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.692 09:43:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:29.692 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:29.692 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:29.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:29.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:10:29.692 00:10:29.692 --- 10.0.0.2 ping statistics --- 00:10:29.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.692 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:29.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:29.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:10:29.692 00:10:29.692 --- 10.0.0.1 ping statistics --- 00:10:29.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.692 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:29.692 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:29.693 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:29.693 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:29.693 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.693 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1225873 00:10:29.693 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1225873 00:10:29.693 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:29.693 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1225873 ']' 00:10:29.693 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.693 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.693 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.693 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.693 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.693 [2024-11-20 09:43:59.771041] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
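The network bring-up traced through common.sh@250-291 splits the two E810 ports into a target side and an initiator side: cvl_0_0 is moved into a fresh network namespace for the target, both interfaces get addresses on 10.0.0.0/24, an iptables rule opens TCP port 4420 on the initiator interface, and reachability is verified with one ping in each direction before nvmf_tgt is launched inside the namespace via ip netns exec. Condensed from the commands visible in the trace, with interface names and addresses exactly as this run uses them:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays behind
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1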
00:10:29.693 [2024-11-20 09:43:59.771114] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.693 [2024-11-20 09:43:59.869225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.693 [2024-11-20 09:43:59.924377] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.693 [2024-11-20 09:43:59.924431] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.693 [2024-11-20 09:43:59.924440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.693 [2024-11-20 09:43:59.924447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.693 [2024-11-20 09:43:59.924453] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.693 [2024-11-20 09:43:59.926657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.693 [2024-11-20 09:43:59.926818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.693 [2024-11-20 09:43:59.926979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.693 [2024-11-20 09:43:59.926979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.693 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.693 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:29.693 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:29.693 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:29.693 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.954 [2024-11-20 09:44:00.652075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.954 Malloc0 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic 
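From here the nmic test drives the target entirely over JSON-RPC: rpc_cmd is a thin wrapper that sends each call to the nvmf_tgt listening on /var/tmp/spdk.sock. The bring-up sequence, including the namespace and listener added just below, is equivalent to these direct invocations (RPC shortened from the full scripts/rpc.py path for readability):

RPC=scripts/rpc.py                                    # talks to /var/tmp/spdk.sock
$RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport, options as the test passes them
$RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420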
-- common/autotest_common.sh@10 -- # set +x 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.954 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.955 [2024-11-20 09:44:00.727203] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:29.955 test case1: single bdev can't be used in multiple subsystems 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.955 [2024-11-20 09:44:00.763057] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:29.955 [2024-11-20 09:44:00.763085] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:29.955 [2024-11-20 09:44:00.763095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.955 request: 00:10:29.955 { 00:10:29.955 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:29.955 "namespace": { 00:10:29.955 "bdev_name": "Malloc0", 00:10:29.955 "no_auto_visible": false 
00:10:29.955 }, 00:10:29.955 "method": "nvmf_subsystem_add_ns", 00:10:29.955 "req_id": 1 00:10:29.955 } 00:10:29.955 Got JSON-RPC error response 00:10:29.955 response: 00:10:29.955 { 00:10:29.955 "code": -32602, 00:10:29.955 "message": "Invalid parameters" 00:10:29.955 } 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:29.955 Adding namespace failed - expected result. 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:29.955 test case2: host connect to nvmf target in multiple paths 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.955 [2024-11-20 09:44:00.775280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.955 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:31.869 09:44:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:33.252 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:33.252 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:33.252 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:33.252 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:33.252 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:35.165 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:35.165 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:35.165 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:35.165 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:35.165 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:35.165 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:35.165 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
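Case1 above fails by design: Malloc0 is already claimed exclusive_write by cnode1, so attaching it to cnode2 is rejected with -32602 and the script records "Adding namespace failed - expected result." Case2 then adds a second listener on port 4421 and connects the kernel initiator to the same subsystem over both ports; the disconnect at the end of the test reports 2 controller(s) for the single NQN while lsblk still sees one serial, i.e. the two sessions merge into one multipath block device. The two connect commands as issued, with host NQN and host ID taken from the trace:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
nvme connect --hostnqn=$HOSTNQN --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn=$HOSTNQN --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421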
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:35.165 [global] 00:10:35.165 thread=1 00:10:35.165 invalidate=1 00:10:35.165 rw=write 00:10:35.165 time_based=1 00:10:35.165 runtime=1 00:10:35.165 ioengine=libaio 00:10:35.165 direct=1 00:10:35.165 bs=4096 00:10:35.165 iodepth=1 00:10:35.165 norandommap=0 00:10:35.165 numjobs=1 00:10:35.165 00:10:35.165 verify_dump=1 00:10:35.165 verify_backlog=512 00:10:35.165 verify_state_save=0 00:10:35.165 do_verify=1 00:10:35.165 verify=crc32c-intel 00:10:35.165 [job0] 00:10:35.165 filename=/dev/nvme0n1 00:10:35.165 Could not set queue depth (nvme0n1) 00:10:35.734 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.734 fio-3.35 00:10:35.734 Starting 1 thread 00:10:36.676 00:10:36.676 job0: (groupid=0, jobs=1): err= 0: pid=1227352: Wed Nov 20 09:44:07 2024 00:10:36.676 read: IOPS=18, BW=74.0KiB/s (75.8kB/s)(76.0KiB/1027msec) 00:10:36.676 slat (nsec): min=26317, max=27389, avg=26592.16, stdev=241.09 00:10:36.676 clat (usec): min=40879, max=41739, avg=40999.40, stdev=183.91 00:10:36.676 lat (usec): min=40906, max=41765, avg=41025.99, stdev=183.86 00:10:36.676 clat percentiles (usec): 00:10:36.676 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:36.676 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:36.676 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:36.676 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:36.676 | 99.99th=[41681] 00:10:36.676 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:10:36.676 slat (usec): min=9, max=24877, avg=75.80, stdev=1098.29 00:10:36.676 clat (usec): min=185, max=636, avg=399.23, stdev=67.47 00:10:36.676 lat (usec): min=196, max=25240, avg=475.02, stdev=1099.02 00:10:36.676 clat percentiles (usec): 00:10:36.676 | 1.00th=[ 233], 5.00th=[ 262], 10.00th=[ 322], 20.00th=[ 338], 00:10:36.676 | 30.00th=[ 359], 40.00th=[ 396], 50.00th=[ 420], 60.00th=[ 424], 00:10:36.676 | 70.00th=[ 437], 80.00th=[ 453], 90.00th=[ 469], 95.00th=[ 486], 00:10:36.676 | 99.00th=[ 553], 99.50th=[ 586], 99.90th=[ 635], 99.95th=[ 635], 00:10:36.676 | 99.99th=[ 635] 00:10:36.676 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:36.676 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:36.676 lat (usec) : 250=2.64%, 500=90.21%, 750=3.58% 00:10:36.676 lat (msec) : 50=3.58% 00:10:36.676 cpu : usr=0.88%, sys=1.07%, ctx=535, majf=0, minf=1 00:10:36.676 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.676 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.676 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.677 00:10:36.677 Run status group 0 (all jobs): 00:10:36.677 READ: bw=74.0KiB/s (75.8kB/s), 74.0KiB/s-74.0KiB/s (75.8kB/s-75.8kB/s), io=76.0KiB (77.8kB), run=1027-1027msec 00:10:36.677 WRITE: bw=1994KiB/s (2042kB/s), 1994KiB/s-1994KiB/s (2042kB/s-2042kB/s), io=2048KiB (2097kB), run=1027-1027msec 00:10:36.677 00:10:36.677 Disk stats (read/write): 00:10:36.677 nvme0n1: ios=41/512, merge=0/0, ticks=1600/188, in_queue=1788, util=98.50% 00:10:36.677 09:44:07 
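The fio run above writes /dev/nvme0n1 for one second and reads the data back to check the embedded crc32c pattern, which is why both a READ and a WRITE line appear for a write-only job. The job file fio-wrapper generated maps onto a plain fio command line roughly as follows (a sketch of equivalent flags; fio-wrapper's exact argument handling is not shown in the log):

fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --thread \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
    --time_based --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512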
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:36.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:36.939 rmmod nvme_tcp 00:10:36.939 rmmod nvme_fabrics 00:10:36.939 rmmod nvme_keyring 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1225873 ']' 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1225873 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1225873 ']' 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1225873 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.939 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1225873 00:10:37.202 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.202 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.202 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1225873' 00:10:37.202 killing process with pid 1225873 00:10:37.202 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1225873 00:10:37.202 09:44:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1225873 00:10:37.202 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:37.202 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:37.202 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:37.202 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:37.202 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:37.202 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:37.202 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:37.202 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:37.202 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:37.202 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.202 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.202 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:39.753 00:10:39.753 real 0m18.103s 00:10:39.753 user 0m49.203s 00:10:39.753 sys 0m6.718s 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.753 ************************************ 00:10:39.753 END TEST nvmf_nmic 00:10:39.753 ************************************ 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:39.753 ************************************ 00:10:39.753 START TEST nvmf_fio_target 00:10:39.753 ************************************ 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:39.753 * Looking for test storage... 
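The teardown traced just above is worth noting: instead of deleting firewall rules one by one, nvmftestfini dumps the whole ruleset and filters out everything it tagged, which works because every rule the harness inserts carries an "-m comment ... SPDK_NVMF:..." marker (visible where the rule was added earlier). The namespace is then removed and the leftover test address flushed. The pattern, with the namespace removal written out as the plain ip command the _remove_spdk_ns helper is assumed to wrap:

iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules
ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1                               # clear the initiator-side test address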
00:10:39.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:39.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.753 --rc genhtml_branch_coverage=1 00:10:39.753 --rc genhtml_function_coverage=1 00:10:39.753 --rc genhtml_legend=1 00:10:39.753 --rc geninfo_all_blocks=1 00:10:39.753 --rc geninfo_unexecuted_blocks=1 00:10:39.753 00:10:39.753 ' 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:39.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.753 --rc genhtml_branch_coverage=1 00:10:39.753 --rc genhtml_function_coverage=1 00:10:39.753 --rc genhtml_legend=1 00:10:39.753 --rc geninfo_all_blocks=1 00:10:39.753 --rc geninfo_unexecuted_blocks=1 00:10:39.753 00:10:39.753 ' 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:39.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.753 --rc genhtml_branch_coverage=1 00:10:39.753 --rc genhtml_function_coverage=1 00:10:39.753 --rc genhtml_legend=1 00:10:39.753 --rc geninfo_all_blocks=1 00:10:39.753 --rc geninfo_unexecuted_blocks=1 00:10:39.753 00:10:39.753 ' 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:39.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.753 --rc genhtml_branch_coverage=1 00:10:39.753 --rc genhtml_function_coverage=1 00:10:39.753 --rc genhtml_legend=1 00:10:39.753 --rc geninfo_all_blocks=1 00:10:39.753 --rc geninfo_unexecuted_blocks=1 00:10:39.753 00:10:39.753 ' 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:39.753 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
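The scripts/common.sh excursion above is a pure-bash version comparison: lt 1.15 2 splits both version strings on any of ". - :", pads the shorter with zeros, and compares component-wise, concluding that the installed lcov predates 2.x and therefore selecting the branch/function coverage flags. A compact sketch of the same logic (simplified; the real cmp_versions also handles the other comparison operators and more input forms):

# Simplified element-wise version compare in the spirit of cmp_versions.
lt() {  # usage: lt 1.15 2  -> returns 0 (true) when $1 < $2
    local IFS=.-: i
    local -a a b
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
lt 1.15 2 && echo "lcov predates 2.x"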
uname -s 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:39.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:39.754 09:44:10 
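Two small things surface in this stretch. First, the host identity used by every nvme connect in this log is derived once here: nvme gen-hostnqn emits a UUID-based NQN and the UUID suffix becomes the host ID, which is why --hostnqn and --hostid earlier shared the value 00d0226a-fbea-ec11-9bc7-a4bf019282be. A sketch of that derivation (the suffix extraction is an assumption; the log only shows the resulting values):

NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed: take the uuid after the last ':'
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"

Second, the "[: : integer expression expected" complaint from common.sh line 33 is a harmless script bug caught by the trace: '[' '' -eq 1 ']' feeds an empty string to a numeric test, so the comparison errors out instead of evaluating false.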
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:39.754 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.895 09:44:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:47.895 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:47.895 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.895 09:44:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.895 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:47.896 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:47.896 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:47.896 09:44:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:47.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:47.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:10:47.896 00:10:47.896 --- 10.0.0.2 ping statistics --- 00:10:47.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.896 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:47.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:47.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:10:47.896 00:10:47.896 --- 10.0.0.1 ping statistics --- 00:10:47.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.896 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1232016 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1232016 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1232016 ']' 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.896 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.896 [2024-11-20 09:44:17.970777] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:10:47.896 [2024-11-20 09:44:17.970840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.896 [2024-11-20 09:44:18.070300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.896 [2024-11-20 09:44:18.124874] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.896 [2024-11-20 09:44:18.124928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.896 [2024-11-20 09:44:18.124936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.896 [2024-11-20 09:44:18.124944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.896 [2024-11-20 09:44:18.124950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:47.896 [2024-11-20 09:44:18.127409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.896 [2024-11-20 09:44:18.127566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.896 [2024-11-20 09:44:18.127726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.896 [2024-11-20 09:44:18.127727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.896 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.896 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:47.896 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:47.896 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.896 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.158 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.158 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:48.158 [2024-11-20 09:44:19.004367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.158 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.418 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:48.418 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.684 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:48.684 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.977 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:48.977 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.307 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:49.307 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:49.307 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.569 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:49.569 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.830 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:49.830 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.830 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:49.830 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:50.091 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:50.350 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:50.350 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:50.350 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:50.350 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:50.609 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.869 [2024-11-20 09:44:21.580871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.869 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:51.130 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:51.130 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:53.040 09:44:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:53.040 09:44:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:53.040 09:44:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:53.040 09:44:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:53.040 09:44:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:53.040 09:44:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:54.974 09:44:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:54.974 09:44:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:54.974 09:44:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.974 09:44:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:54.974 09:44:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.974 09:44:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:54.974 09:44:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:54.974 [global] 00:10:54.974 thread=1 00:10:54.974 invalidate=1 00:10:54.974 rw=write 00:10:54.974 time_based=1 00:10:54.974 runtime=1 00:10:54.974 ioengine=libaio 00:10:54.974 direct=1 00:10:54.974 bs=4096 00:10:54.974 iodepth=1 00:10:54.974 norandommap=0 00:10:54.974 numjobs=1 00:10:54.974 00:10:54.974 verify_dump=1 00:10:54.974 verify_backlog=512 00:10:54.974 verify_state_save=0 00:10:54.974 do_verify=1 00:10:54.974 verify=crc32c-intel 00:10:54.974 [job0] 00:10:54.974 filename=/dev/nvme0n1 00:10:54.974 [job1] 00:10:54.974 filename=/dev/nvme0n2 00:10:54.974 [job2] 00:10:54.974 filename=/dev/nvme0n3 00:10:54.974 [job3] 00:10:54.974 filename=/dev/nvme0n4 00:10:54.974 Could not set queue depth (nvme0n1) 00:10:54.974 Could not set queue depth (nvme0n2) 00:10:54.974 Could not set queue depth (nvme0n3) 00:10:54.974 Could not set queue depth (nvme0n4) 00:10:55.238 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.238 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.238 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.238 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.238 fio-3.35 00:10:55.238 Starting 4 threads 00:10:56.643 00:10:56.644 job0: (groupid=0, jobs=1): err= 0: pid=1233759: Wed Nov 20 09:44:27 2024 00:10:56.644 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:56.644 slat (nsec): min=23996, max=43237, avg=24926.95, stdev=1755.42 00:10:56.644 clat (usec): min=726, max=1175, avg=985.00, stdev=73.52 00:10:56.644 lat (usec): min=752, max=1200, avg=1009.92, stdev=73.38 00:10:56.644 clat percentiles (usec): 00:10:56.644 | 1.00th=[ 758], 5.00th=[ 832], 10.00th=[ 881], 20.00th=[ 947], 
00:10:56.644 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[ 996], 60.00th=[ 1012], 00:10:56.644 | 70.00th=[ 1020], 80.00th=[ 1037], 90.00th=[ 1057], 95.00th=[ 1074], 00:10:56.644 | 99.00th=[ 1123], 99.50th=[ 1139], 99.90th=[ 1172], 99.95th=[ 1172], 00:10:56.644 | 99.99th=[ 1172] 00:10:56.644 write: IOPS=748, BW=2993KiB/s (3065kB/s)(2996KiB/1001msec); 0 zone resets 00:10:56.644 slat (nsec): min=9342, max=68922, avg=28925.11, stdev=9374.28 00:10:56.644 clat (usec): min=203, max=953, avg=603.42, stdev=125.08 00:10:56.644 lat (usec): min=212, max=986, avg=632.35, stdev=128.81 00:10:56.644 clat percentiles (usec): 00:10:56.644 | 1.00th=[ 293], 5.00th=[ 383], 10.00th=[ 441], 20.00th=[ 498], 00:10:56.644 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 635], 00:10:56.644 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 799], 00:10:56.644 | 99.00th=[ 881], 99.50th=[ 922], 99.90th=[ 955], 99.95th=[ 955], 00:10:56.644 | 99.99th=[ 955] 00:10:56.644 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:10:56.644 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:56.644 lat (usec) : 250=0.24%, 500=11.97%, 750=41.08%, 1000=26.88% 00:10:56.644 lat (msec) : 2=19.83% 00:10:56.644 cpu : usr=1.80%, sys=3.60%, ctx=1261, majf=0, minf=1 00:10:56.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.644 issued rwts: total=512,749,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.644 job1: (groupid=0, jobs=1): err= 0: pid=1233783: Wed Nov 20 09:44:27 2024 00:10:56.644 read: IOPS=17, BW=69.6KiB/s (71.2kB/s)(72.0KiB/1035msec) 00:10:56.644 slat (nsec): min=24978, max=25672, avg=25358.78, stdev=181.77 00:10:56.644 clat (usec): min=961, max=42119, avg=39449.60, stdev=9614.54 00:10:56.644 lat (usec): min=987, max=42145, avg=39474.96, stdev=9614.53 00:10:56.644 clat percentiles (usec): 00:10:56.644 | 1.00th=[ 963], 5.00th=[ 963], 10.00th=[41157], 20.00th=[41157], 00:10:56.644 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:56.644 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:56.644 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:56.644 | 99.99th=[42206] 00:10:56.644 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:10:56.644 slat (nsec): min=9318, max=67914, avg=28365.08, stdev=9785.07 00:10:56.644 clat (usec): min=255, max=868, avg=597.79, stdev=109.26 00:10:56.644 lat (usec): min=267, max=899, avg=626.16, stdev=113.05 00:10:56.644 clat percentiles (usec): 00:10:56.644 | 1.00th=[ 347], 5.00th=[ 388], 10.00th=[ 457], 20.00th=[ 502], 00:10:56.644 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 627], 00:10:56.644 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 725], 95.00th=[ 766], 00:10:56.644 | 99.00th=[ 840], 99.50th=[ 865], 99.90th=[ 873], 99.95th=[ 873], 00:10:56.644 | 99.99th=[ 873] 00:10:56.644 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:10:56.644 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:56.644 lat (usec) : 500=18.68%, 750=71.51%, 1000=6.60% 00:10:56.644 lat (msec) : 50=3.21% 00:10:56.644 cpu : usr=0.68%, sys=1.35%, ctx=531, majf=0, minf=1 00:10:56.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.644 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.644 job2: (groupid=0, jobs=1): err= 0: pid=1233805: Wed Nov 20 09:44:27 2024 00:10:56.644 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:56.644 slat (nsec): min=8295, max=30757, avg=27666.43, stdev=938.52 00:10:56.644 clat (usec): min=670, max=1148, avg=962.92, stdev=56.73 00:10:56.644 lat (usec): min=699, max=1175, avg=990.59, stdev=56.65 00:10:56.644 clat percentiles (usec): 00:10:56.644 | 1.00th=[ 799], 5.00th=[ 848], 10.00th=[ 889], 20.00th=[ 930], 00:10:56.644 | 30.00th=[ 947], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 988], 00:10:56.644 | 70.00th=[ 996], 80.00th=[ 1004], 90.00th=[ 1020], 95.00th=[ 1037], 00:10:56.644 | 99.00th=[ 1090], 99.50th=[ 1090], 99.90th=[ 1156], 99.95th=[ 1156], 00:10:56.644 | 99.99th=[ 1156] 00:10:56.644 write: IOPS=776, BW=3105KiB/s (3179kB/s)(3108KiB/1001msec); 0 zone resets 00:10:56.644 slat (nsec): min=9635, max=68618, avg=32630.99, stdev=9838.57 00:10:56.644 clat (usec): min=165, max=857, avg=589.30, stdev=118.59 00:10:56.644 lat (usec): min=177, max=893, avg=621.93, stdev=122.53 00:10:56.644 clat percentiles (usec): 00:10:56.644 | 1.00th=[ 285], 5.00th=[ 375], 10.00th=[ 429], 20.00th=[ 494], 00:10:56.644 | 30.00th=[ 537], 40.00th=[ 562], 50.00th=[ 603], 60.00th=[ 635], 00:10:56.644 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 725], 95.00th=[ 766], 00:10:56.644 | 99.00th=[ 832], 99.50th=[ 848], 99.90th=[ 857], 99.95th=[ 857], 00:10:56.644 | 99.99th=[ 857] 00:10:56.644 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:10:56.644 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:56.644 lat (usec) : 250=0.23%, 500=12.57%, 750=43.44%, 1000=35.22% 00:10:56.644 lat (msec) : 2=8.53% 00:10:56.644 cpu : usr=4.20%, sys=3.70%, ctx=1290, majf=0, minf=1 00:10:56.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.644 issued rwts: total=512,777,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.644 job3: (groupid=0, jobs=1): err= 0: pid=1233807: Wed Nov 20 09:44:27 2024 00:10:56.644 read: IOPS=16, BW=66.0KiB/s (67.6kB/s)(68.0KiB/1030msec) 00:10:56.644 slat (nsec): min=26688, max=27690, avg=26945.82, stdev=290.65 00:10:56.644 clat (usec): min=40867, max=42163, avg=41778.39, stdev=364.39 00:10:56.644 lat (usec): min=40894, max=42190, avg=41805.33, stdev=364.32 00:10:56.644 clat percentiles (usec): 00:10:56.644 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41681], 00:10:56.644 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:56.644 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:56.644 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:56.644 | 99.99th=[42206] 00:10:56.644 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:10:56.644 slat (nsec): min=10219, max=70684, avg=30797.70, stdev=10464.61 00:10:56.644 clat (usec): min=217, max=958, avg=584.88, stdev=128.11 
00:10:56.644 lat (usec): min=231, max=992, avg=615.67, stdev=132.89 00:10:56.644 clat percentiles (usec): 00:10:56.644 | 1.00th=[ 253], 5.00th=[ 351], 10.00th=[ 416], 20.00th=[ 478], 00:10:56.644 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 619], 00:10:56.644 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 766], 00:10:56.644 | 99.00th=[ 857], 99.50th=[ 922], 99.90th=[ 955], 99.95th=[ 955], 00:10:56.644 | 99.99th=[ 955] 00:10:56.644 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:10:56.644 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:56.644 lat (usec) : 250=0.76%, 500=23.63%, 750=64.46%, 1000=7.94% 00:10:56.644 lat (msec) : 50=3.21% 00:10:56.644 cpu : usr=0.87%, sys=1.36%, ctx=531, majf=0, minf=1 00:10:56.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.644 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.644 00:10:56.644 Run status group 0 (all jobs): 00:10:56.644 READ: bw=4093KiB/s (4191kB/s), 66.0KiB/s-2046KiB/s (67.6kB/s-2095kB/s), io=4236KiB (4338kB), run=1001-1035msec 00:10:56.644 WRITE: bw=9855KiB/s (10.1MB/s), 1979KiB/s-3105KiB/s (2026kB/s-3179kB/s), io=9.96MiB (10.4MB), run=1001-1035msec 00:10:56.644 00:10:56.644 Disk stats (read/write): 00:10:56.644 nvme0n1: ios=544/512, merge=0/0, ticks=516/299, in_queue=815, util=86.27% 00:10:56.644 nvme0n2: ios=40/512, merge=0/0, ticks=525/299, in_queue=824, util=86.20% 00:10:56.644 nvme0n3: ios=527/512, merge=0/0, ticks=1427/236, in_queue=1663, util=96.39% 00:10:56.644 nvme0n4: ios=69/512, merge=0/0, ticks=803/285, in_queue=1088, util=96.46% 00:10:56.644 09:44:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:56.644 [global] 00:10:56.644 thread=1 00:10:56.644 invalidate=1 00:10:56.644 rw=randwrite 00:10:56.644 time_based=1 00:10:56.644 runtime=1 00:10:56.644 ioengine=libaio 00:10:56.644 direct=1 00:10:56.644 bs=4096 00:10:56.644 iodepth=1 00:10:56.644 norandommap=0 00:10:56.644 numjobs=1 00:10:56.644 00:10:56.644 verify_dump=1 00:10:56.644 verify_backlog=512 00:10:56.644 verify_state_save=0 00:10:56.644 do_verify=1 00:10:56.644 verify=crc32c-intel 00:10:56.644 [job0] 00:10:56.644 filename=/dev/nvme0n1 00:10:56.644 [job1] 00:10:56.644 filename=/dev/nvme0n2 00:10:56.644 [job2] 00:10:56.644 filename=/dev/nvme0n3 00:10:56.644 [job3] 00:10:56.644 filename=/dev/nvme0n4 00:10:56.644 Could not set queue depth (nvme0n1) 00:10:56.644 Could not set queue depth (nvme0n2) 00:10:56.645 Could not set queue depth (nvme0n3) 00:10:56.645 Could not set queue depth (nvme0n4) 00:10:56.906 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:56.906 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:56.906 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:56.906 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:56.906 fio-3.35 00:10:56.906 Starting 4 threads 00:10:58.324 00:10:58.324 job0: 
(groupid=0, jobs=1): err= 0: pid=1234265: Wed Nov 20 09:44:28 2024 00:10:58.324 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:58.324 slat (nsec): min=7221, max=42900, avg=24901.96, stdev=2021.90 00:10:58.324 clat (usec): min=467, max=1190, avg=957.19, stdev=100.55 00:10:58.325 lat (usec): min=475, max=1215, avg=982.09, stdev=100.64 00:10:58.325 clat percentiles (usec): 00:10:58.325 | 1.00th=[ 668], 5.00th=[ 766], 10.00th=[ 824], 20.00th=[ 889], 00:10:58.325 | 30.00th=[ 914], 40.00th=[ 947], 50.00th=[ 971], 60.00th=[ 996], 00:10:58.325 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1090], 00:10:58.325 | 99.00th=[ 1139], 99.50th=[ 1139], 99.90th=[ 1188], 99.95th=[ 1188], 00:10:58.325 | 99.99th=[ 1188] 00:10:58.325 write: IOPS=777, BW=3109KiB/s (3184kB/s)(3112KiB/1001msec); 0 zone resets 00:10:58.325 slat (nsec): min=3584, max=63578, avg=23406.24, stdev=10741.53 00:10:58.325 clat (usec): min=225, max=980, avg=603.79, stdev=120.88 00:10:58.325 lat (usec): min=258, max=1011, avg=627.19, stdev=122.83 00:10:58.325 clat percentiles (usec): 00:10:58.325 | 1.00th=[ 285], 5.00th=[ 396], 10.00th=[ 445], 20.00th=[ 498], 00:10:58.325 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 619], 60.00th=[ 652], 00:10:58.325 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 775], 00:10:58.325 | 99.00th=[ 840], 99.50th=[ 906], 99.90th=[ 979], 99.95th=[ 979], 00:10:58.325 | 99.99th=[ 979] 00:10:58.325 bw ( KiB/s): min= 4096, max= 4096, per=36.27%, avg=4096.00, stdev= 0.00, samples=1 00:10:58.325 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:58.325 lat (usec) : 250=0.16%, 500=12.33%, 750=43.49%, 1000=29.38% 00:10:58.325 lat (msec) : 2=14.65% 00:10:58.325 cpu : usr=2.20%, sys=2.80%, ctx=1290, majf=0, minf=1 00:10:58.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.325 issued rwts: total=512,778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.325 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.325 job1: (groupid=0, jobs=1): err= 0: pid=1234282: Wed Nov 20 09:44:28 2024 00:10:58.325 read: IOPS=127, BW=511KiB/s (524kB/s)(512KiB/1001msec) 00:10:58.325 slat (nsec): min=6996, max=41653, avg=23733.79, stdev=6184.15 00:10:58.325 clat (usec): min=275, max=42046, avg=5123.85, stdev=12630.00 00:10:58.325 lat (usec): min=301, max=42072, avg=5147.59, stdev=12630.58 00:10:58.325 clat percentiles (usec): 00:10:58.325 | 1.00th=[ 396], 5.00th=[ 553], 10.00th=[ 594], 20.00th=[ 611], 00:10:58.325 | 30.00th=[ 627], 40.00th=[ 652], 50.00th=[ 758], 60.00th=[ 807], 00:10:58.325 | 70.00th=[ 848], 80.00th=[ 889], 90.00th=[41157], 95.00th=[42206], 00:10:58.325 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:58.325 | 99.99th=[42206] 00:10:58.325 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:58.325 slat (nsec): min=9300, max=66302, avg=29314.59, stdev=7874.15 00:10:58.325 clat (usec): min=246, max=951, avg=627.58, stdev=115.66 00:10:58.325 lat (usec): min=268, max=981, avg=656.90, stdev=117.90 00:10:58.325 clat percentiles (usec): 00:10:58.325 | 1.00th=[ 351], 5.00th=[ 408], 10.00th=[ 474], 20.00th=[ 529], 00:10:58.325 | 30.00th=[ 586], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 668], 00:10:58.325 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 799], 00:10:58.325 | 99.00th=[ 840], 
99.50th=[ 906], 99.90th=[ 955], 99.95th=[ 955], 00:10:58.325 | 99.99th=[ 955] 00:10:58.325 bw ( KiB/s): min= 4096, max= 4096, per=36.27%, avg=4096.00, stdev= 0.00, samples=1 00:10:58.325 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:58.325 lat (usec) : 250=0.16%, 500=11.88%, 750=67.81%, 1000=16.72% 00:10:58.325 lat (msec) : 2=1.25%, 50=2.19% 00:10:58.325 cpu : usr=1.10%, sys=1.70%, ctx=640, majf=0, minf=2 00:10:58.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.325 issued rwts: total=128,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.325 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.325 job2: (groupid=0, jobs=1): err= 0: pid=1234302: Wed Nov 20 09:44:28 2024 00:10:58.325 read: IOPS=613, BW=2454KiB/s (2512kB/s)(2456KiB/1001msec) 00:10:58.325 slat (nsec): min=7152, max=59317, avg=25846.36, stdev=6250.99 00:10:58.325 clat (usec): min=200, max=1573, avg=706.91, stdev=189.58 00:10:58.325 lat (usec): min=208, max=1600, avg=732.75, stdev=190.04 00:10:58.325 clat percentiles (usec): 00:10:58.325 | 1.00th=[ 314], 5.00th=[ 469], 10.00th=[ 537], 20.00th=[ 570], 00:10:58.325 | 30.00th=[ 594], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 750], 00:10:58.325 | 70.00th=[ 791], 80.00th=[ 840], 90.00th=[ 979], 95.00th=[ 1090], 00:10:58.325 | 99.00th=[ 1205], 99.50th=[ 1270], 99.90th=[ 1582], 99.95th=[ 1582], 00:10:58.325 | 99.99th=[ 1582] 00:10:58.325 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:58.325 slat (nsec): min=9892, max=65432, avg=29443.55, stdev=10141.16 00:10:58.325 clat (usec): min=109, max=1062, avg=494.99, stdev=152.91 00:10:58.325 lat (usec): min=120, max=1096, avg=524.44, stdev=155.00 00:10:58.325 clat percentiles (usec): 00:10:58.325 | 1.00th=[ 159], 5.00th=[ 269], 10.00th=[ 338], 20.00th=[ 371], 00:10:58.325 | 30.00th=[ 388], 40.00th=[ 412], 50.00th=[ 469], 60.00th=[ 545], 00:10:58.325 | 70.00th=[ 594], 80.00th=[ 644], 90.00th=[ 709], 95.00th=[ 742], 00:10:58.325 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 938], 99.95th=[ 1057], 00:10:58.325 | 99.99th=[ 1057] 00:10:58.325 bw ( KiB/s): min= 4096, max= 4096, per=36.27%, avg=4096.00, stdev= 0.00, samples=1 00:10:58.325 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:58.325 lat (usec) : 250=1.95%, 500=34.31%, 750=46.52%, 1000=13.61% 00:10:58.325 lat (msec) : 2=3.60% 00:10:58.325 cpu : usr=2.30%, sys=4.80%, ctx=1640, majf=0, minf=1 00:10:58.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.325 issued rwts: total=614,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.325 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.325 job3: (groupid=0, jobs=1): err= 0: pid=1234309: Wed Nov 20 09:44:28 2024 00:10:58.325 read: IOPS=56, BW=228KiB/s (233kB/s)(228KiB/1001msec) 00:10:58.325 slat (nsec): min=26265, max=45547, avg=27548.58, stdev=3838.60 00:10:58.325 clat (usec): min=912, max=41992, avg=11778.91, stdev=17991.87 00:10:58.325 lat (usec): min=939, max=42018, avg=11806.46, stdev=17991.38 00:10:58.325 clat percentiles (usec): 00:10:58.325 | 1.00th=[ 914], 5.00th=[ 988], 10.00th=[ 1057], 20.00th=[ 1090], 00:10:58.325 | 
30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1188], 00:10:58.325 | 70.00th=[ 1205], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:10:58.325 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:58.325 | 99.99th=[42206] 00:10:58.325 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:58.325 slat (nsec): min=8901, max=74448, avg=30912.99, stdev=8218.95 00:10:58.325 clat (usec): min=224, max=3810, avg=601.43, stdev=185.26 00:10:58.325 lat (usec): min=257, max=3844, avg=632.34, stdev=186.80 00:10:58.325 clat percentiles (usec): 00:10:58.325 | 1.00th=[ 293], 5.00th=[ 379], 10.00th=[ 445], 20.00th=[ 490], 00:10:58.325 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635], 00:10:58.325 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 783], 00:10:58.325 | 99.00th=[ 824], 99.50th=[ 865], 99.90th=[ 3818], 99.95th=[ 3818], 00:10:58.325 | 99.99th=[ 3818] 00:10:58.325 bw ( KiB/s): min= 4096, max= 4096, per=36.27%, avg=4096.00, stdev= 0.00, samples=1 00:10:58.325 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:58.325 lat (usec) : 250=0.35%, 500=19.68%, 750=62.57%, 1000=7.73% 00:10:58.325 lat (msec) : 2=6.85%, 4=0.18%, 50=2.64% 00:10:58.325 cpu : usr=1.00%, sys=2.40%, ctx=570, majf=0, minf=1 00:10:58.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.325 issued rwts: total=57,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.325 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.325 00:10:58.325 Run status group 0 (all jobs): 00:10:58.325 READ: bw=5239KiB/s (5364kB/s), 228KiB/s-2454KiB/s (233kB/s-2512kB/s), io=5244KiB (5370kB), run=1001-1001msec 00:10:58.325 WRITE: bw=11.0MiB/s (11.6MB/s), 2046KiB/s-4092KiB/s (2095kB/s-4190kB/s), io=11.0MiB (11.6MB), run=1001-1001msec 00:10:58.325 00:10:58.325 Disk stats (read/write): 00:10:58.325 nvme0n1: ios=562/516, merge=0/0, ticks=567/311, in_queue=878, util=87.88% 00:10:58.325 nvme0n2: ios=44/512, merge=0/0, ticks=499/310, in_queue=809, util=86.85% 00:10:58.325 nvme0n3: ios=569/895, merge=0/0, ticks=726/430, in_queue=1156, util=96.84% 00:10:58.325 nvme0n4: ios=19/512, merge=0/0, ticks=508/246, in_queue=754, util=89.43% 00:10:58.325 09:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:58.325 [global] 00:10:58.325 thread=1 00:10:58.325 invalidate=1 00:10:58.325 rw=write 00:10:58.325 time_based=1 00:10:58.325 runtime=1 00:10:58.325 ioengine=libaio 00:10:58.325 direct=1 00:10:58.325 bs=4096 00:10:58.325 iodepth=128 00:10:58.325 norandommap=0 00:10:58.325 numjobs=1 00:10:58.325 00:10:58.325 verify_dump=1 00:10:58.325 verify_backlog=512 00:10:58.325 verify_state_save=0 00:10:58.325 do_verify=1 00:10:58.325 verify=crc32c-intel 00:10:58.325 [job0] 00:10:58.325 filename=/dev/nvme0n1 00:10:58.325 [job1] 00:10:58.325 filename=/dev/nvme0n2 00:10:58.325 [job2] 00:10:58.325 filename=/dev/nvme0n3 00:10:58.325 [job3] 00:10:58.325 filename=/dev/nvme0n4 00:10:58.325 Could not set queue depth (nvme0n1) 00:10:58.325 Could not set queue depth (nvme0n2) 00:10:58.325 Could not set queue depth (nvme0n3) 00:10:58.325 Could not set queue depth (nvme0n4) 00:10:58.586 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.586 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.586 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.586 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.586 fio-3.35 00:10:58.586 Starting 4 threads 00:10:59.984 00:10:59.984 job0: (groupid=0, jobs=1): err= 0: pid=1234759: Wed Nov 20 09:44:30 2024 00:10:59.984 read: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec) 00:10:59.984 slat (nsec): min=919, max=14846k, avg=62054.46, stdev=561434.84 00:10:59.984 clat (usec): min=1416, max=27786, avg=9251.33, stdev=3758.84 00:10:59.984 lat (usec): min=1419, max=27792, avg=9313.38, stdev=3791.39 00:10:59.984 clat percentiles (usec): 00:10:59.984 | 1.00th=[ 2802], 5.00th=[ 4178], 10.00th=[ 5538], 20.00th=[ 6783], 00:10:59.984 | 30.00th=[ 7242], 40.00th=[ 7504], 50.00th=[ 8225], 60.00th=[ 8979], 00:10:59.984 | 70.00th=[10028], 80.00th=[12125], 90.00th=[15008], 95.00th=[16581], 00:10:59.984 | 99.00th=[20317], 99.50th=[20579], 99.90th=[24773], 99.95th=[24773], 00:10:59.984 | 99.99th=[27657] 00:10:59.984 write: IOPS=7403, BW=28.9MiB/s (30.3MB/s)(29.1MiB/1006msec); 0 zone resets 00:10:59.984 slat (nsec): min=1654, max=9292.6k, avg=55942.60, stdev=394565.34 00:10:59.984 clat (usec): min=465, max=49826, avg=8886.67, stdev=6463.88 00:10:59.984 lat (usec): min=471, max=49828, avg=8942.61, stdev=6504.96 00:10:59.984 clat percentiles (usec): 00:10:59.984 | 1.00th=[ 1303], 5.00th=[ 3130], 10.00th=[ 4047], 20.00th=[ 5014], 00:10:59.984 | 30.00th=[ 6128], 40.00th=[ 6652], 50.00th=[ 7111], 60.00th=[ 7504], 00:10:59.984 | 70.00th=[ 8029], 80.00th=[ 9634], 90.00th=[19006], 95.00th=[25035], 00:10:59.984 | 99.00th=[31327], 99.50th=[33817], 99.90th=[45351], 99.95th=[50070], 00:10:59.984 | 99.99th=[50070] 00:10:59.984 bw ( KiB/s): min=28584, max=29984, per=32.29%, avg=29284.00, stdev=989.95, samples=2 00:10:59.984 iops : min= 7146, max= 7496, avg=7321.00, stdev=247.49, samples=2 00:10:59.984 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.10% 00:10:59.984 lat (msec) : 2=1.11%, 4=5.57%, 10=68.78%, 20=18.85%, 50=5.54% 00:10:59.984 cpu : usr=5.77%, sys=7.66%, ctx=515, majf=0, minf=1 00:10:59.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:59.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.984 issued rwts: total=6656,7448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:59.984 job1: (groupid=0, jobs=1): err= 0: pid=1234777: Wed Nov 20 09:44:30 2024 00:10:59.984 read: IOPS=3103, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1005msec) 00:10:59.984 slat (nsec): min=946, max=19121k, avg=127662.95, stdev=874181.46 00:10:59.984 clat (usec): min=3844, max=51273, avg=15770.30, stdev=9267.09 00:10:59.984 lat (usec): min=4159, max=51303, avg=15897.97, stdev=9361.60 00:10:59.984 clat percentiles (usec): 00:10:59.984 | 1.00th=[ 6325], 5.00th=[ 7963], 10.00th=[ 8979], 20.00th=[ 9765], 00:10:59.984 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10945], 60.00th=[12256], 00:10:59.984 | 70.00th=[17171], 80.00th=[23725], 90.00th=[31327], 95.00th=[38536], 00:10:59.984 | 99.00th=[42206], 99.50th=[43254], 99.90th=[47973], 99.95th=[50070], 00:10:59.984 | 99.99th=[51119] 00:10:59.984 
write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:10:59.984 slat (nsec): min=1671, max=15393k, avg=163723.42, stdev=981338.29 00:10:59.984 clat (usec): min=1280, max=112670, avg=21849.27, stdev=19777.69 00:10:59.984 lat (usec): min=1291, max=112678, avg=22012.99, stdev=19911.51 00:10:59.984 clat percentiles (msec): 00:10:59.984 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:10:59.984 | 30.00th=[ 10], 40.00th=[ 14], 50.00th=[ 17], 60.00th=[ 18], 00:10:59.984 | 70.00th=[ 24], 80.00th=[ 27], 90.00th=[ 45], 95.00th=[ 64], 00:10:59.984 | 99.00th=[ 107], 99.50th=[ 110], 99.90th=[ 113], 99.95th=[ 113], 00:10:59.984 | 99.99th=[ 113] 00:10:59.984 bw ( KiB/s): min=11464, max=16568, per=15.46%, avg=14016.00, stdev=3609.07, samples=2 00:10:59.985 iops : min= 2866, max= 4142, avg=3504.00, stdev=902.27, samples=2 00:10:59.985 lat (msec) : 2=0.10%, 4=0.24%, 10=28.67%, 20=40.36%, 50=26.32% 00:10:59.985 lat (msec) : 100=3.27%, 250=1.04% 00:10:59.985 cpu : usr=2.09%, sys=3.49%, ctx=286, majf=0, minf=1 00:10:59.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:59.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.985 issued rwts: total=3119,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:59.985 job2: (groupid=0, jobs=1): err= 0: pid=1234797: Wed Nov 20 09:44:30 2024 00:10:59.985 read: IOPS=7264, BW=28.4MiB/s (29.8MB/s)(28.5MiB/1004msec) 00:10:59.985 slat (nsec): min=917, max=19028k, avg=69364.35, stdev=486453.29 00:10:59.985 clat (usec): min=1210, max=44432, avg=8719.56, stdev=4081.64 00:10:59.985 lat (usec): min=4772, max=44458, avg=8788.92, stdev=4126.99 00:10:59.985 clat percentiles (usec): 00:10:59.985 | 1.00th=[ 5080], 5.00th=[ 6063], 10.00th=[ 6521], 20.00th=[ 6915], 00:10:59.985 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7570], 60.00th=[ 8225], 00:10:59.985 | 70.00th=[ 8455], 80.00th=[ 8979], 90.00th=[10552], 95.00th=[15139], 00:10:59.985 | 99.00th=[28705], 99.50th=[32113], 99.90th=[32375], 99.95th=[32375], 00:10:59.985 | 99.99th=[44303] 00:10:59.985 write: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1004msec); 0 zone resets 00:10:59.985 slat (nsec): min=1561, max=12901k, avg=60886.21, stdev=369513.65 00:10:59.985 clat (usec): min=3447, max=35250, avg=8267.47, stdev=3893.84 00:10:59.985 lat (usec): min=3449, max=35260, avg=8328.36, stdev=3922.47 00:10:59.985 clat percentiles (usec): 00:10:59.985 | 1.00th=[ 4555], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 6718], 00:10:59.985 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 7635], 00:10:59.985 | 70.00th=[ 8160], 80.00th=[ 8586], 90.00th=[ 9896], 95.00th=[12649], 00:10:59.985 | 99.00th=[29754], 99.50th=[34341], 99.90th=[34866], 99.95th=[35390], 00:10:59.985 | 99.99th=[35390] 00:10:59.985 bw ( KiB/s): min=25080, max=36344, per=33.87%, avg=30712.00, stdev=7964.85, samples=2 00:10:59.985 iops : min= 6270, max= 9086, avg=7678.00, stdev=1991.21, samples=2 00:10:59.985 lat (msec) : 2=0.01%, 4=0.07%, 10=89.84%, 20=6.75%, 50=3.35% 00:10:59.985 cpu : usr=3.69%, sys=6.28%, ctx=861, majf=0, minf=2 00:10:59.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:59.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.985 issued rwts: 
total=7294,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:59.985 job3: (groupid=0, jobs=1): err= 0: pid=1234804: Wed Nov 20 09:44:30 2024 00:10:59.985 read: IOPS=3913, BW=15.3MiB/s (16.0MB/s)(15.4MiB/1006msec) 00:10:59.985 slat (nsec): min=976, max=11336k, avg=114838.03, stdev=763075.81 00:10:59.985 clat (usec): min=4863, max=62319, avg=13808.64, stdev=7221.77 00:10:59.985 lat (usec): min=4868, max=62326, avg=13923.48, stdev=7304.36 00:10:59.985 clat percentiles (usec): 00:10:59.985 | 1.00th=[ 6128], 5.00th=[ 7767], 10.00th=[ 8094], 20.00th=[ 9241], 00:10:59.985 | 30.00th=[ 9896], 40.00th=[11338], 50.00th=[11994], 60.00th=[12649], 00:10:59.985 | 70.00th=[14353], 80.00th=[16057], 90.00th=[20841], 95.00th=[27395], 00:10:59.985 | 99.00th=[44827], 99.50th=[55837], 99.90th=[62129], 99.95th=[62129], 00:10:59.985 | 99.99th=[62129] 00:10:59.985 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:10:59.985 slat (nsec): min=1630, max=14670k, avg=124483.44, stdev=690103.41 00:10:59.985 clat (usec): min=1154, max=62312, avg=17871.97, stdev=11906.33 00:10:59.985 lat (usec): min=1165, max=62321, avg=17996.45, stdev=11975.64 00:10:59.985 clat percentiles (usec): 00:10:59.985 | 1.00th=[ 4490], 5.00th=[ 5997], 10.00th=[ 6587], 20.00th=[ 8029], 00:10:59.985 | 30.00th=[ 9503], 40.00th=[11076], 50.00th=[13829], 60.00th=[16057], 00:10:59.985 | 70.00th=[23462], 80.00th=[26870], 90.00th=[33424], 95.00th=[43254], 00:10:59.985 | 99.00th=[55837], 99.50th=[56361], 99.90th=[60556], 99.95th=[61604], 00:10:59.985 | 99.99th=[62129] 00:10:59.985 bw ( KiB/s): min=13840, max=18928, per=18.07%, avg=16384.00, stdev=3597.76, samples=2 00:10:59.985 iops : min= 3460, max= 4732, avg=4096.00, stdev=899.44, samples=2 00:10:59.985 lat (msec) : 2=0.12%, 4=0.19%, 10=32.45%, 20=44.18%, 50=21.20% 00:10:59.985 lat (msec) : 100=1.85% 00:10:59.985 cpu : usr=2.89%, sys=4.38%, ctx=365, majf=0, minf=2 00:10:59.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:59.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.985 issued rwts: total=3937,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:59.985 00:10:59.985 Run status group 0 (all jobs): 00:10:59.985 READ: bw=81.6MiB/s (85.5MB/s), 12.1MiB/s-28.4MiB/s (12.7MB/s-29.8MB/s), io=82.1MiB (86.0MB), run=1004-1006msec 00:10:59.985 WRITE: bw=88.6MiB/s (92.9MB/s), 13.9MiB/s-29.9MiB/s (14.6MB/s-31.3MB/s), io=89.1MiB (93.4MB), run=1004-1006msec 00:10:59.985 00:10:59.985 Disk stats (read/write): 00:10:59.985 nvme0n1: ios=5596/6147, merge=0/0, ticks=46174/55351, in_queue=101525, util=95.99% 00:10:59.985 nvme0n2: ios=3103/3319, merge=0/0, ticks=24327/26245, in_queue=50572, util=99.69% 00:10:59.985 nvme0n3: ios=5694/6144, merge=0/0, ticks=25866/23820, in_queue=49686, util=88.31% 00:10:59.985 nvme0n4: ios=3117/3495, merge=0/0, ticks=37877/60267, in_queue=98144, util=95.16% 00:10:59.985 09:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:59.985 [global] 00:10:59.985 thread=1 00:10:59.985 invalidate=1 00:10:59.985 rw=randwrite 00:10:59.985 time_based=1 00:10:59.985 runtime=1 00:10:59.985 ioengine=libaio 00:10:59.985 direct=1 00:10:59.985 bs=4096 
00:10:59.985 iodepth=128 00:10:59.985 norandommap=0 00:10:59.985 numjobs=1 00:10:59.985 00:10:59.985 verify_dump=1 00:10:59.985 verify_backlog=512 00:10:59.985 verify_state_save=0 00:10:59.985 do_verify=1 00:10:59.985 verify=crc32c-intel 00:10:59.985 [job0] 00:10:59.985 filename=/dev/nvme0n1 00:10:59.985 [job1] 00:10:59.985 filename=/dev/nvme0n2 00:10:59.985 [job2] 00:10:59.985 filename=/dev/nvme0n3 00:10:59.985 [job3] 00:10:59.985 filename=/dev/nvme0n4 00:10:59.985 Could not set queue depth (nvme0n1) 00:10:59.985 Could not set queue depth (nvme0n2) 00:10:59.985 Could not set queue depth (nvme0n3) 00:10:59.985 Could not set queue depth (nvme0n4) 00:11:00.244 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.244 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.244 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.244 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.244 fio-3.35 00:11:00.244 Starting 4 threads 00:11:01.649 00:11:01.649 job0: (groupid=0, jobs=1): err= 0: pid=1235229: Wed Nov 20 09:44:32 2024 00:11:01.649 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:11:01.649 slat (nsec): min=913, max=9849.0k, avg=100191.42, stdev=631461.49 00:11:01.649 clat (usec): min=5071, max=43907, avg=12155.29, stdev=5354.99 00:11:01.649 lat (usec): min=5080, max=43913, avg=12255.49, stdev=5421.13 00:11:01.649 clat percentiles (usec): 00:11:01.649 | 1.00th=[ 5932], 5.00th=[ 6849], 10.00th=[ 7635], 20.00th=[ 8586], 00:11:01.649 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10945], 60.00th=[11600], 00:11:01.649 | 70.00th=[12911], 80.00th=[13829], 90.00th=[19530], 95.00th=[23987], 00:11:01.649 | 99.00th=[32900], 99.50th=[34341], 99.90th=[43779], 99.95th=[43779], 00:11:01.649 | 99.99th=[43779] 00:11:01.649 write: IOPS=3816, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1008msec); 0 zone resets 00:11:01.649 slat (nsec): min=1490, max=8317.1k, avg=159468.15, stdev=675630.13 00:11:01.649 clat (usec): min=1102, max=73988, avg=21922.71, stdev=15696.94 00:11:01.649 lat (usec): min=1112, max=73996, avg=22082.17, stdev=15802.09 00:11:01.649 clat percentiles (usec): 00:11:01.649 | 1.00th=[ 2114], 5.00th=[ 5866], 10.00th=[ 6980], 20.00th=[11338], 00:11:01.649 | 30.00th=[13698], 40.00th=[15270], 50.00th=[17171], 60.00th=[20841], 00:11:01.649 | 70.00th=[23462], 80.00th=[26346], 90.00th=[47449], 95.00th=[64226], 00:11:01.649 | 99.00th=[69731], 99.50th=[70779], 99.90th=[73925], 99.95th=[73925], 00:11:01.649 | 99.99th=[73925] 00:11:01.649 bw ( KiB/s): min=11392, max=18360, per=16.62%, avg=14876.00, stdev=4927.12, samples=2 00:11:01.649 iops : min= 2848, max= 4590, avg=3719.00, stdev=1231.78, samples=2 00:11:01.649 lat (msec) : 2=0.28%, 4=0.58%, 10=28.76%, 20=43.83%, 50=21.88% 00:11:01.649 lat (msec) : 100=4.67% 00:11:01.649 cpu : usr=2.58%, sys=3.48%, ctx=518, majf=0, minf=2 00:11:01.649 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:01.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.649 issued rwts: total=3584,3847,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.649 job1: (groupid=0, jobs=1): err= 0: pid=1235236: Wed Nov 20 
09:44:32 2024 00:11:01.649 read: IOPS=8057, BW=31.5MiB/s (33.0MB/s)(31.6MiB/1004msec) 00:11:01.649 slat (nsec): min=936, max=10165k, avg=65840.24, stdev=485717.30 00:11:01.649 clat (usec): min=1835, max=34280, avg=8374.47, stdev=4259.32 00:11:01.649 lat (usec): min=2235, max=34938, avg=8440.31, stdev=4302.65 00:11:01.649 clat percentiles (usec): 00:11:01.649 | 1.00th=[ 3425], 5.00th=[ 4490], 10.00th=[ 5014], 20.00th=[ 5669], 00:11:01.649 | 30.00th=[ 6259], 40.00th=[ 6849], 50.00th=[ 7177], 60.00th=[ 7635], 00:11:01.649 | 70.00th=[ 8094], 80.00th=[10028], 90.00th=[13173], 95.00th=[17695], 00:11:01.649 | 99.00th=[25560], 99.50th=[27919], 99.90th=[28443], 99.95th=[28967], 00:11:01.649 | 99.99th=[34341] 00:11:01.649 write: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(32.0MiB/1004msec); 0 zone resets 00:11:01.649 slat (nsec): min=1563, max=7990.6k, avg=51837.39, stdev=352014.91 00:11:01.649 clat (usec): min=1341, max=27721, avg=7246.62, stdev=3481.94 00:11:01.649 lat (usec): min=1350, max=27730, avg=7298.46, stdev=3501.81 00:11:01.649 clat percentiles (usec): 00:11:01.649 | 1.00th=[ 2933], 5.00th=[ 3589], 10.00th=[ 3818], 20.00th=[ 5014], 00:11:01.649 | 30.00th=[ 5473], 40.00th=[ 5932], 50.00th=[ 6587], 60.00th=[ 7242], 00:11:01.649 | 70.00th=[ 7701], 80.00th=[ 8717], 90.00th=[10552], 95.00th=[13566], 00:11:01.649 | 99.00th=[21103], 99.50th=[22938], 99.90th=[27657], 99.95th=[27657], 00:11:01.649 | 99.99th=[27657] 00:11:01.649 bw ( KiB/s): min=28672, max=36864, per=36.60%, avg=32768.00, stdev=5792.62, samples=2 00:11:01.649 iops : min= 7168, max= 9216, avg=8192.00, stdev=1448.15, samples=2 00:11:01.649 lat (msec) : 2=0.08%, 4=6.90%, 10=77.56%, 20=13.01%, 50=2.45% 00:11:01.649 cpu : usr=5.88%, sys=8.18%, ctx=487, majf=0, minf=1 00:11:01.649 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:01.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.649 issued rwts: total=8090,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.649 job2: (groupid=0, jobs=1): err= 0: pid=1235269: Wed Nov 20 09:44:32 2024 00:11:01.649 read: IOPS=6470, BW=25.3MiB/s (26.5MB/s)(25.5MiB/1008msec) 00:11:01.650 slat (nsec): min=929, max=17321k, avg=71711.81, stdev=548834.25 00:11:01.650 clat (usec): min=1327, max=58911, avg=10283.21, stdev=7017.85 00:11:01.650 lat (usec): min=1352, max=61123, avg=10354.92, stdev=7056.17 00:11:01.650 clat percentiles (usec): 00:11:01.650 | 1.00th=[ 2442], 5.00th=[ 5211], 10.00th=[ 6915], 20.00th=[ 7701], 00:11:01.650 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9503], 00:11:01.650 | 70.00th=[10028], 80.00th=[10683], 90.00th=[12256], 95.00th=[15664], 00:11:01.650 | 99.00th=[53740], 99.50th=[58983], 99.90th=[58983], 99.95th=[58983], 00:11:01.650 | 99.99th=[58983] 00:11:01.650 write: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec); 0 zone resets 00:11:01.650 slat (nsec): min=1526, max=13532k, avg=66220.29, stdev=455330.85 00:11:01.650 clat (usec): min=585, max=43302, avg=9085.12, stdev=6007.26 00:11:01.650 lat (usec): min=619, max=45878, avg=9151.34, stdev=6054.52 00:11:01.650 clat percentiles (usec): 00:11:01.650 | 1.00th=[ 1188], 5.00th=[ 2737], 10.00th=[ 4228], 20.00th=[ 5669], 00:11:01.650 | 30.00th=[ 6718], 40.00th=[ 7308], 50.00th=[ 7898], 60.00th=[ 8455], 00:11:01.650 | 70.00th=[ 8979], 80.00th=[ 9634], 90.00th=[16057], 95.00th=[23987], 00:11:01.650 | 
99.00th=[31327], 99.50th=[32900], 99.90th=[34341], 99.95th=[35914], 00:11:01.650 | 99.99th=[43254] 00:11:01.650 bw ( KiB/s): min=24576, max=28672, per=29.74%, avg=26624.00, stdev=2896.31, samples=2 00:11:01.650 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:11:01.650 lat (usec) : 750=0.02%, 1000=0.04% 00:11:01.650 lat (msec) : 2=2.19%, 4=3.73%, 10=70.08%, 20=17.77%, 50=5.59% 00:11:01.650 lat (msec) : 100=0.58% 00:11:01.650 cpu : usr=3.87%, sys=7.55%, ctx=509, majf=0, minf=1 00:11:01.650 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:01.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.650 issued rwts: total=6522,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.650 job3: (groupid=0, jobs=1): err= 0: pid=1235283: Wed Nov 20 09:44:32 2024 00:11:01.650 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:11:01.650 slat (nsec): min=928, max=24150k, avg=166278.84, stdev=1184830.23 00:11:01.650 clat (usec): min=4370, max=73629, avg=21885.00, stdev=16272.86 00:11:01.650 lat (usec): min=4378, max=73638, avg=22051.28, stdev=16356.43 00:11:01.650 clat percentiles (usec): 00:11:01.650 | 1.00th=[ 4752], 5.00th=[ 6128], 10.00th=[ 7177], 20.00th=[ 8455], 00:11:01.650 | 30.00th=[10683], 40.00th=[16188], 50.00th=[17957], 60.00th=[21627], 00:11:01.650 | 70.00th=[23462], 80.00th=[28443], 90.00th=[45351], 95.00th=[64226], 00:11:01.650 | 99.00th=[71828], 99.50th=[71828], 99.90th=[73925], 99.95th=[73925], 00:11:01.650 | 99.99th=[73925] 00:11:01.650 write: IOPS=3837, BW=15.0MiB/s (15.7MB/s)(15.1MiB/1007msec); 0 zone resets 00:11:01.650 slat (nsec): min=1573, max=8322.2k, avg=99411.46, stdev=551976.43 00:11:01.650 clat (usec): min=2883, max=31989, avg=12701.14, stdev=4454.48 00:11:01.650 lat (usec): min=2891, max=31995, avg=12800.55, stdev=4468.40 00:11:01.650 clat percentiles (usec): 00:11:01.650 | 1.00th=[ 4621], 5.00th=[ 6194], 10.00th=[ 8029], 20.00th=[ 8586], 00:11:01.650 | 30.00th=[10552], 40.00th=[11338], 50.00th=[11994], 60.00th=[12649], 00:11:01.650 | 70.00th=[14746], 80.00th=[16712], 90.00th=[19006], 95.00th=[21627], 00:11:01.650 | 99.00th=[23462], 99.50th=[26346], 99.90th=[31851], 99.95th=[32113], 00:11:01.650 | 99.99th=[32113] 00:11:01.650 bw ( KiB/s): min=10920, max=18968, per=16.69%, avg=14944.00, stdev=5690.80, samples=2 00:11:01.650 iops : min= 2730, max= 4742, avg=3736.00, stdev=1422.70, samples=2 00:11:01.650 lat (msec) : 4=0.35%, 10=28.22%, 20=47.26%, 50=19.51%, 100=4.66% 00:11:01.650 cpu : usr=2.09%, sys=5.07%, ctx=285, majf=0, minf=1 00:11:01.650 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:01.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.650 issued rwts: total=3584,3864,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.650 00:11:01.650 Run status group 0 (all jobs): 00:11:01.650 READ: bw=84.4MiB/s (88.5MB/s), 13.9MiB/s-31.5MiB/s (14.6MB/s-33.0MB/s), io=85.1MiB (89.2MB), run=1004-1008msec 00:11:01.650 WRITE: bw=87.4MiB/s (91.7MB/s), 14.9MiB/s-31.9MiB/s (15.6MB/s-33.4MB/s), io=88.1MiB (92.4MB), run=1004-1008msec 00:11:01.650 00:11:01.650 Disk stats (read/write): 00:11:01.650 nvme0n1: ios=2641/3072, merge=0/0, 
ticks=19414/45973, in_queue=65387, util=82.06% 00:11:01.650 nvme0n2: ios=5657/6081, merge=0/0, ticks=29801/25729, in_queue=55530, util=99.38% 00:11:01.650 nvme0n3: ios=5116/5120, merge=0/0, ticks=33254/26305, in_queue=59559, util=94.00% 00:11:01.650 nvme0n4: ios=3072/3095, merge=0/0, ticks=18335/10348, in_queue=28683, util=88.72% 00:11:01.650 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:01.650 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1235546 00:11:01.650 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:01.650 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:01.650 [global] 00:11:01.650 thread=1 00:11:01.650 invalidate=1 00:11:01.650 rw=read 00:11:01.650 time_based=1 00:11:01.650 runtime=10 00:11:01.650 ioengine=libaio 00:11:01.650 direct=1 00:11:01.650 bs=4096 00:11:01.650 iodepth=1 00:11:01.650 norandommap=1 00:11:01.650 numjobs=1 00:11:01.650 00:11:01.650 [job0] 00:11:01.650 filename=/dev/nvme0n1 00:11:01.650 [job1] 00:11:01.650 filename=/dev/nvme0n2 00:11:01.650 [job2] 00:11:01.650 filename=/dev/nvme0n3 00:11:01.650 [job3] 00:11:01.650 filename=/dev/nvme0n4 00:11:01.650 Could not set queue depth (nvme0n1) 00:11:01.650 Could not set queue depth (nvme0n2) 00:11:01.650 Could not set queue depth (nvme0n3) 00:11:01.650 Could not set queue depth (nvme0n4) 00:11:01.912 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.912 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.912 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.912 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.912 fio-3.35 00:11:01.912 Starting 4 threads 00:11:04.453 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:04.714 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:04.714 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=262144, buflen=4096 00:11:04.714 fio: pid=1235768, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:04.714 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=12357632, buflen=4096 00:11:04.714 fio: pid=1235761, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:04.714 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:04.714 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:04.975 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:04.975 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:04.975 fio: io_u error on 
file /dev/nvme0n1: Operation not supported: read offset=323584, buflen=4096 00:11:04.975 fio: pid=1235750, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:05.236 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.236 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:05.236 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=2592768, buflen=4096 00:11:05.236 fio: pid=1235755, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:05.236 00:11:05.236 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1235750: Wed Nov 20 09:44:36 2024 00:11:05.236 read: IOPS=26, BW=106KiB/s (108kB/s)(316KiB/2985msec) 00:11:05.236 slat (usec): min=24, max=13493, avg=194.52, stdev=1505.75 00:11:05.236 clat (usec): min=331, max=42132, avg=37297.76, stdev=12416.33 00:11:05.236 lat (usec): min=357, max=54947, avg=37494.43, stdev=12563.47 00:11:05.236 clat percentiles (usec): 00:11:05.236 | 1.00th=[ 330], 5.00th=[ 523], 10.00th=[ 988], 20.00th=[41157], 00:11:05.236 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:11:05.236 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:05.236 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:05.236 | 99.99th=[42206] 00:11:05.236 bw ( KiB/s): min= 96, max= 151, per=2.26%, avg=108.60, stdev=23.95, samples=5 00:11:05.236 iops : min= 24, max= 37, avg=27.00, stdev= 5.66, samples=5 00:11:05.236 lat (usec) : 500=2.50%, 750=6.25%, 1000=1.25% 00:11:05.236 lat (msec) : 50=88.75% 00:11:05.236 cpu : usr=0.03%, sys=0.07%, ctx=81, majf=0, minf=1 00:11:05.236 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.236 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.236 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.236 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.236 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1235755: Wed Nov 20 09:44:36 2024 00:11:05.236 read: IOPS=200, BW=799KiB/s (818kB/s)(2532KiB/3168msec) 00:11:05.236 slat (usec): min=6, max=14581, avg=48.85, stdev=578.12 00:11:05.236 clat (usec): min=150, max=42100, avg=4914.04, stdev=12809.41 00:11:05.236 lat (usec): min=157, max=56003, avg=4962.93, stdev=12888.60 00:11:05.236 clat percentiles (usec): 00:11:05.236 | 1.00th=[ 289], 5.00th=[ 322], 10.00th=[ 338], 20.00th=[ 367], 00:11:05.236 | 30.00th=[ 433], 40.00th=[ 490], 50.00th=[ 506], 60.00th=[ 529], 00:11:05.236 | 70.00th=[ 553], 80.00th=[ 586], 90.00th=[41157], 95.00th=[42206], 00:11:05.236 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:05.236 | 99.99th=[42206] 00:11:05.236 bw ( KiB/s): min= 89, max= 3336, per=17.50%, avg=838.83, stdev=1317.95, samples=6 00:11:05.236 iops : min= 22, max= 834, avg=209.67, stdev=329.52, samples=6 00:11:05.236 lat (usec) : 250=0.79%, 500=45.58%, 750=42.11%, 1000=0.47% 00:11:05.236 lat (msec) : 2=0.16%, 50=10.73% 00:11:05.236 cpu : usr=0.16%, sys=0.63%, ctx=636, majf=0, minf=2 00:11:05.236 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:11:05.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.237 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.237 issued rwts: total=634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.237 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1235761: Wed Nov 20 09:44:36 2024 00:11:05.237 read: IOPS=1089, BW=4355KiB/s (4460kB/s)(11.8MiB/2771msec) 00:11:05.237 slat (usec): min=6, max=9590, avg=29.35, stdev=215.95 00:11:05.237 clat (usec): min=133, max=1572, avg=876.18, stdev=218.97 00:11:05.237 lat (usec): min=140, max=10551, avg=905.53, stdev=311.55 00:11:05.237 clat percentiles (usec): 00:11:05.237 | 1.00th=[ 221], 5.00th=[ 363], 10.00th=[ 469], 20.00th=[ 848], 00:11:05.237 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 971], 00:11:05.237 | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1020], 95.00th=[ 1045], 00:11:05.237 | 99.00th=[ 1090], 99.50th=[ 1106], 99.90th=[ 1401], 99.95th=[ 1483], 00:11:05.237 | 99.99th=[ 1565] 00:11:05.237 bw ( KiB/s): min= 3992, max= 6283, per=93.15%, avg=4461.40, stdev=1018.37, samples=5 00:11:05.237 iops : min= 998, max= 1570, avg=1115.20, stdev=254.26, samples=5 00:11:05.237 lat (usec) : 250=1.29%, 500=11.03%, 750=5.53%, 1000=61.43% 00:11:05.237 lat (msec) : 2=20.68% 00:11:05.237 cpu : usr=0.87%, sys=3.32%, ctx=3020, majf=0, minf=2 00:11:05.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.237 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.237 issued rwts: total=3018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.237 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1235768: Wed Nov 20 09:44:36 2024 00:11:05.237 read: IOPS=24, BW=98.2KiB/s (101kB/s)(256KiB/2606msec) 00:11:05.237 slat (nsec): min=26270, max=36020, avg=26912.05, stdev=1163.73 00:11:05.237 clat (usec): min=672, max=41913, avg=40348.53, stdev=5040.09 00:11:05.237 lat (usec): min=708, max=41940, avg=40375.44, stdev=5038.93 00:11:05.237 clat percentiles (usec): 00:11:05.237 | 1.00th=[ 676], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:05.237 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:05.237 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:05.237 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:05.237 | 99.99th=[41681] 00:11:05.237 bw ( KiB/s): min= 96, max= 104, per=2.07%, avg=99.20, stdev= 4.38, samples=5 00:11:05.237 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:11:05.237 lat (usec) : 750=1.54% 00:11:05.237 lat (msec) : 50=96.92% 00:11:05.237 cpu : usr=0.12%, sys=0.00%, ctx=65, majf=0, minf=2 00:11:05.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.237 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.237 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.237 00:11:05.237 Run status group 0 (all jobs): 00:11:05.237 READ: bw=4789KiB/s (4904kB/s), 
98.2KiB/s-4355KiB/s (101kB/s-4460kB/s), io=14.8MiB (15.5MB), run=2606-3168msec 00:11:05.237 00:11:05.237 Disk stats (read/write): 00:11:05.237 nvme0n1: ios=76/0, merge=0/0, ticks=2822/0, in_queue=2822, util=94.29% 00:11:05.237 nvme0n2: ios=631/0, merge=0/0, ticks=3029/0, in_queue=3029, util=95.23% 00:11:05.237 nvme0n3: ios=2870/0, merge=0/0, ticks=2536/0, in_queue=2536, util=95.99% 00:11:05.237 nvme0n4: ios=64/0, merge=0/0, ticks=2584/0, in_queue=2584, util=96.39% 00:11:05.237 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.237 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:05.497 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.497 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:05.757 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.757 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:05.757 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.757 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:06.018 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:06.018 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1235546 00:11:06.018 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:06.018 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.018 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:06.018 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:06.278 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:06.278 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.278 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:06.278 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.278 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:06.278 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:06.278 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:06.278 nvmf hotplug test: fio failed as expected 00:11:06.278 
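The err=95 in the fio output above is Linux EOPNOTSUPP ("Operation not supported"): the test deletes each malloc bdev over RPC while fio still has reads in flight against the exported /dev/nvme0nX namespaces, so the orphaned jobs error out and fio exits non-zero, which is exactly the outcome the hotplug test expects. A minimal sketch of that pattern, using the rpc.py path and bdev names from the trace (the fio flags here are illustrative, not the literal fio.sh job file):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Keep reads in flight against the connected NVMe-oF namespaces.
  fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --bs=4k \
      --ioengine=libaio --time_based --runtime=30 &
  fio_pid=$!
  # Hot-remove the backing bdevs while fio is still running.
  for malloc_bdev in Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      "$rpc" bdev_malloc_delete "$malloc_bdev"
  done
  # The surviving jobs now fail with err=95 (EOPNOTSUPP), so fio returns
  # non-zero and the test treats that as a pass.
  wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'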
09:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:06.278 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:06.278 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:06.278 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:06.278 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:06.278 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:06.278 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:06.278 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:06.278 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:06.278 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:06.278 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:06.278 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:06.278 rmmod nvme_tcp 00:11:06.278 rmmod nvme_fabrics 00:11:06.538 rmmod nvme_keyring 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1232016 ']' 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1232016 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1232016 ']' 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1232016 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1232016 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1232016' 00:11:06.538 killing process with pid 1232016 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1232016 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1232016 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:06.538 09:44:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.538 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:09.097 00:11:09.097 real 0m29.335s 00:11:09.097 user 2m41.141s 00:11:09.097 sys 0m9.493s 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.097 ************************************ 00:11:09.097 END TEST nvmf_fio_target 00:11:09.097 ************************************ 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:09.097 ************************************ 00:11:09.097 START TEST nvmf_bdevio 00:11:09.097 ************************************ 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:09.097 * Looking for test storage... 
00:11:09.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:09.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.097 --rc genhtml_branch_coverage=1 00:11:09.097 --rc genhtml_function_coverage=1 00:11:09.097 --rc genhtml_legend=1 00:11:09.097 --rc geninfo_all_blocks=1 00:11:09.097 --rc geninfo_unexecuted_blocks=1 00:11:09.097 00:11:09.097 ' 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:09.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.097 --rc genhtml_branch_coverage=1 00:11:09.097 --rc genhtml_function_coverage=1 00:11:09.097 --rc genhtml_legend=1 00:11:09.097 --rc geninfo_all_blocks=1 00:11:09.097 --rc geninfo_unexecuted_blocks=1 00:11:09.097 00:11:09.097 ' 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:09.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.097 --rc genhtml_branch_coverage=1 00:11:09.097 --rc genhtml_function_coverage=1 00:11:09.097 --rc genhtml_legend=1 00:11:09.097 --rc geninfo_all_blocks=1 00:11:09.097 --rc geninfo_unexecuted_blocks=1 00:11:09.097 00:11:09.097 ' 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:09.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.097 --rc genhtml_branch_coverage=1 00:11:09.097 --rc genhtml_function_coverage=1 00:11:09.097 --rc genhtml_legend=1 00:11:09.097 --rc geninfo_all_blocks=1 00:11:09.097 --rc geninfo_unexecuted_blocks=1 00:11:09.097 00:11:09.097 ' 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.097 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:09.098 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.238 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.238 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:17.238 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:17.238 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:17.238 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:17.238 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:17.238 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:17.238 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:17.238 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:17.238 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:17.238 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:17.238 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:17.238 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:17.238 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:17.238 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:17.239 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:17.239 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:17.239 09:44:46 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:17.239 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:17.239 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.239 
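Having picked cvl_0_0 as the target-side interface and cvl_0_1 as the initiator side, the trace that follows moves the target NIC into its own network namespace so the TCP transport is exercised over a real hop between 10.0.0.1 and 10.0.0.2. Condensed from the ip/iptables calls traced below (the SPDK_NVMF comment on the firewall rule is what lets teardown strip it later):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator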
09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:17.239 09:44:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:17.239 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:17.239 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:17.239 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:17.239 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:17.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:17.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.713 ms 00:11:17.239 00:11:17.239 --- 10.0.0.2 ping statistics --- 00:11:17.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.239 rtt min/avg/max/mdev = 0.713/0.713/0.713/0.000 ms 00:11:17.239 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:17.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:17.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:11:17.240 00:11:17.240 --- 10.0.0.1 ping statistics --- 00:11:17.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.240 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1241008 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1241008 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1241008 ']' 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.240 [2024-11-20 09:44:47.161420] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:11:17.240 [2024-11-20 09:44:47.161469] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.240 [2024-11-20 09:44:47.252997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.240 [2024-11-20 09:44:47.288200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.240 [2024-11-20 09:44:47.288230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.240 [2024-11-20 09:44:47.288236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.240 [2024-11-20 09:44:47.288241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.240 [2024-11-20 09:44:47.288245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:17.240 [2024-11-20 09:44:47.289515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:17.240 [2024-11-20 09:44:47.289667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:17.240 [2024-11-20 09:44:47.289819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.240 [2024-11-20 09:44:47.289821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:17.240 09:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.240 [2024-11-20 09:44:48.012709] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.240 Malloc0 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.240 09:44:48 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.240 [2024-11-20 09:44:48.084220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:17.240 { 00:11:17.240 "params": { 00:11:17.240 "name": "Nvme$subsystem", 00:11:17.240 "trtype": "$TEST_TRANSPORT", 00:11:17.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:17.240 "adrfam": "ipv4", 00:11:17.240 "trsvcid": "$NVMF_PORT", 00:11:17.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:17.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:17.240 "hdgst": ${hdgst:-false}, 00:11:17.240 "ddgst": ${ddgst:-false} 00:11:17.240 }, 00:11:17.240 "method": "bdev_nvme_attach_controller" 00:11:17.240 } 00:11:17.240 EOF 00:11:17.240 )") 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:17.240 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:17.240 "params": { 00:11:17.240 "name": "Nvme1", 00:11:17.240 "trtype": "tcp", 00:11:17.240 "traddr": "10.0.0.2", 00:11:17.240 "adrfam": "ipv4", 00:11:17.240 "trsvcid": "4420", 00:11:17.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:17.240 "hdgst": false, 00:11:17.240 "ddgst": false 00:11:17.240 }, 00:11:17.240 "method": "bdev_nvme_attach_controller" 00:11:17.240 }' 00:11:17.240 [2024-11-20 09:44:48.139855] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:11:17.240 [2024-11-20 09:44:48.139906] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241127 ] 00:11:17.501 [2024-11-20 09:44:48.230465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:17.501 [2024-11-20 09:44:48.269220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.501 [2024-11-20 09:44:48.269487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.501 [2024-11-20 09:44:48.269487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.761 I/O targets: 00:11:17.762 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:17.762 00:11:17.762 00:11:17.762 CUnit - A unit testing framework for C - Version 2.1-3 00:11:17.762 http://cunit.sourceforge.net/ 00:11:17.762 00:11:17.762 00:11:17.762 Suite: bdevio tests on: Nvme1n1 00:11:17.762 Test: blockdev write read block ...passed 00:11:17.762 Test: blockdev write zeroes read block ...passed 00:11:17.762 Test: blockdev write zeroes read no split ...passed 00:11:18.022 Test: blockdev write zeroes read split ...passed 00:11:18.022 Test: blockdev write zeroes read split partial ...passed 00:11:18.022 Test: blockdev reset ...[2024-11-20 09:44:48.739585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:18.022 [2024-11-20 09:44:48.739656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x752970 (9): Bad file descriptor 00:11:18.022 [2024-11-20 09:44:48.794548] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:18.022 passed 00:11:18.022 Test: blockdev write read 8 blocks ...passed 00:11:18.022 Test: blockdev write read size > 128k ...passed 00:11:18.022 Test: blockdev write read invalid size ...passed 00:11:18.022 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:18.022 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:18.022 Test: blockdev write read max offset ...passed 00:11:18.282 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:18.282 Test: blockdev writev readv 8 blocks ...passed 00:11:18.282 Test: blockdev writev readv 30 x 1block ...passed 00:11:18.282 Test: blockdev writev readv block ...passed 00:11:18.282 Test: blockdev writev readv size > 128k ...passed 00:11:18.282 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:18.283 Test: blockdev comparev and writev ...[2024-11-20 09:44:49.100377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.283 [2024-11-20 09:44:49.100410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:18.283 [2024-11-20 09:44:49.100426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.283 [2024-11-20 09:44:49.100434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:18.283 [2024-11-20 09:44:49.100904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.283 [2024-11-20 09:44:49.100916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:18.283 [2024-11-20 09:44:49.100930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.283 [2024-11-20 09:44:49.100939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:18.283 [2024-11-20 09:44:49.101417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.283 [2024-11-20 09:44:49.101435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:18.283 [2024-11-20 09:44:49.101449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.283 [2024-11-20 09:44:49.101456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:18.283 [2024-11-20 09:44:49.101922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.283 [2024-11-20 09:44:49.101934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:18.283 [2024-11-20 09:44:49.101947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.283 [2024-11-20 09:44:49.101955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:18.283 passed 00:11:18.283 Test: blockdev nvme passthru rw ...passed 00:11:18.283 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:44:49.185989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:18.283 [2024-11-20 09:44:49.186003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:18.283 [2024-11-20 09:44:49.186367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:18.283 [2024-11-20 09:44:49.186379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:18.283 [2024-11-20 09:44:49.186742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:18.283 [2024-11-20 09:44:49.186753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:18.283 [2024-11-20 09:44:49.187112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:18.283 [2024-11-20 09:44:49.187124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:18.283 passed 00:11:18.542 Test: blockdev nvme admin passthru ...passed 00:11:18.542 Test: blockdev copy ...passed 00:11:18.542 00:11:18.542 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.542 suites 1 1 n/a 0 0 00:11:18.542 tests 23 23 23 0 0 00:11:18.542 asserts 152 152 152 0 n/a 00:11:18.542 00:11:18.542 Elapsed time = 1.337 seconds 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:18.542 rmmod nvme_tcp 00:11:18.542 rmmod nvme_fabrics 00:11:18.542 rmmod nvme_keyring 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
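Stripped of the xtrace noise, the whole bdevio pass is a short RPC sequence on the target (traced above as bdevio.sh steps 18-22 via rpc_cmd, which wraps the same rpc.py) followed by a symmetric teardown, which is what nvmftestfini performs next. A condensed recap under the same names as the trace (the literal nvmf_tgt pid is taken from this run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nvmfpid=1241008   # nvmf_tgt pid from this run's trace
  # Target setup for the bdevio run
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512 -b Malloc0
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevio then attaches as an initiator using the generated JSON config
  # piped in over --json /dev/fd/62 and runs its 23 CUnit tests.
  # Teardown, mirroring the nvmftestfini trace
  "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-tcp                                # produces the rmmod lines above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                                        # killprocess on the nvmf_tgt pid
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test firewall rule
  ip -4 addr flush cvl_0_1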
00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1241008 ']' 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1241008 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1241008 ']' 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1241008 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:18.542 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1241008 00:11:18.803 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:18.803 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:18.803 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1241008' 00:11:18.803 killing process with pid 1241008 00:11:18.803 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1241008 00:11:18.803 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1241008 00:11:18.803 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:18.803 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:18.803 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:18.803 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:18.803 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:18.803 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:18.803 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:18.803 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:18.803 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:18.803 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.803 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.803 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.348 09:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:21.348 00:11:21.348 real 0m12.131s 00:11:21.348 user 0m14.090s 00:11:21.348 sys 0m6.002s 00:11:21.348 09:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.348 09:44:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.348 ************************************ 00:11:21.348 END TEST nvmf_bdevio 00:11:21.348 ************************************ 00:11:21.348 09:44:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:21.348 00:11:21.348 real 5m4.298s 00:11:21.348 user 11m51.704s 00:11:21.348 sys 1m49.944s 
00:11:21.348 09:44:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.348 09:44:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:21.348 ************************************ 00:11:21.348 END TEST nvmf_target_core 00:11:21.348 ************************************ 00:11:21.348 09:44:51 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:21.348 09:44:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:21.348 09:44:51 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.348 09:44:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:21.348 ************************************ 00:11:21.348 START TEST nvmf_target_extra 00:11:21.348 ************************************ 00:11:21.348 09:44:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:21.348 * Looking for test storage... 00:11:21.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:21.348 09:44:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:21.348 09:44:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:21.348 09:44:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:21.348 09:44:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:21.348 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.348 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.348 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.348 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.348 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.348 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.348 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.348 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.348 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.348 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.348 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.348 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:21.348 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:21.348 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.348 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:21.348 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:21.348 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:21.348 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:21.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.349 --rc genhtml_branch_coverage=1 00:11:21.349 --rc genhtml_function_coverage=1 00:11:21.349 --rc genhtml_legend=1 00:11:21.349 --rc geninfo_all_blocks=1 00:11:21.349 --rc geninfo_unexecuted_blocks=1 00:11:21.349 00:11:21.349 ' 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:21.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.349 --rc genhtml_branch_coverage=1 00:11:21.349 --rc genhtml_function_coverage=1 00:11:21.349 --rc genhtml_legend=1 00:11:21.349 --rc geninfo_all_blocks=1 00:11:21.349 --rc geninfo_unexecuted_blocks=1 00:11:21.349 00:11:21.349 ' 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:21.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.349 --rc genhtml_branch_coverage=1 00:11:21.349 --rc genhtml_function_coverage=1 00:11:21.349 --rc genhtml_legend=1 00:11:21.349 --rc geninfo_all_blocks=1 00:11:21.349 --rc geninfo_unexecuted_blocks=1 00:11:21.349 00:11:21.349 ' 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:21.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.349 --rc genhtml_branch_coverage=1 00:11:21.349 --rc genhtml_function_coverage=1 00:11:21.349 --rc genhtml_legend=1 00:11:21.349 --rc geninfo_all_blocks=1 00:11:21.349 --rc geninfo_unexecuted_blocks=1 00:11:21.349 00:11:21.349 ' 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
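The lt/cmp_versions exchange traced at the top of this block (scripts/common.sh@333-368) is a pure-bash dotted-version comparison used to choose lcov flags: lt 1.15 2 succeeds here, so the lcov 1.x '--rc lcov_branch_coverage=1 ...' options get exported. A minimal sketch of the '<' case, reconstructed from the xtrace (the in-tree helper routes through cmp_versions and supports other operators via its $op argument):

    # Return 0 when dotted version $1 sorts strictly before $2.
    lt() {
        local -a ver1 ver2
        local IFS=.- v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # $1 sorts after $2
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # $1 sorts before $2
        done
        return 1    # equal versions: strictly-less-than is false
    }
    # As in the trace: lt 1.15 2 -> 0, so the lcov 1.x options are selected.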
00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:21.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:21.349 ************************************ 00:11:21.349 START TEST nvmf_example 00:11:21.349 ************************************ 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:21.349 * Looking for test storage... 
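One note on the shell error recorded above, before the storage-probe result that follows: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test's -eq demands integer operands, which produces '[: : integer expression expected'; the run continues because the failed test merely evaluates false in its if-condition. A hedged illustration of the failure mode and the usual guard (FLAG is an illustrative name, not the actual variable used at that line):

    FLAG=''
    [ "$FLAG" -eq 1 ]          # -> [: : integer expression expected (exit status 2)
    [ "${FLAG:-0}" -eq 1 ]     # defaulting the expansion keeps the operand numeric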
00:11:21.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:21.349 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:21.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.611 --rc genhtml_branch_coverage=1 00:11:21.611 --rc genhtml_function_coverage=1 00:11:21.611 --rc genhtml_legend=1 00:11:21.611 --rc geninfo_all_blocks=1 00:11:21.611 --rc geninfo_unexecuted_blocks=1 00:11:21.611 00:11:21.611 ' 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:21.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.611 --rc genhtml_branch_coverage=1 00:11:21.611 --rc genhtml_function_coverage=1 00:11:21.611 --rc genhtml_legend=1 00:11:21.611 --rc geninfo_all_blocks=1 00:11:21.611 --rc geninfo_unexecuted_blocks=1 00:11:21.611 00:11:21.611 ' 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:21.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.611 --rc genhtml_branch_coverage=1 00:11:21.611 --rc genhtml_function_coverage=1 00:11:21.611 --rc genhtml_legend=1 00:11:21.611 --rc geninfo_all_blocks=1 00:11:21.611 --rc geninfo_unexecuted_blocks=1 00:11:21.611 00:11:21.611 ' 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:21.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.611 --rc genhtml_branch_coverage=1 00:11:21.611 --rc genhtml_function_coverage=1 00:11:21.611 --rc genhtml_legend=1 00:11:21.611 --rc geninfo_all_blocks=1 00:11:21.611 --rc geninfo_unexecuted_blocks=1 00:11:21.611 00:11:21.611 ' 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:21.611 09:44:52 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.611 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:21.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:21.612 09:44:52 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:21.612 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.750 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.750 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:29.750 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:29.750 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:29.751 09:44:59 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:29.751 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:29.751 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:29.751 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:29.751 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.751 09:44:59 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:29.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:29.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:11:29.751 00:11:29.751 --- 10.0.0.2 ping statistics --- 00:11:29.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.751 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:11:29.751 00:11:29.751 --- 10.0.0.1 ping statistics --- 00:11:29.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.751 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:29.751 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1245850 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1245850 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1245850 ']' 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.752 09:44:59 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.752 09:44:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:30.013 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:42.248 Initializing NVMe Controllers 00:11:42.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:42.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:42.248 Initialization complete. Launching workers. 00:11:42.248 ======================================================== 00:11:42.248 Latency(us) 00:11:42.248 Device Information : IOPS MiB/s Average min max 00:11:42.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17897.80 69.91 3575.68 612.67 19985.35 00:11:42.248 ======================================================== 00:11:42.248 Total : 17897.80 69.91 3575.68 612.67 19985.35 00:11:42.248 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.248 rmmod nvme_tcp 00:11:42.248 rmmod nvme_fabrics 00:11:42.248 rmmod nvme_keyring 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1245850 ']' 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1245850 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1245850 ']' 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1245850 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1245850 00:11:42.248 09:45:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1245850' 00:11:42.248 killing process with pid 1245850 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1245850 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1245850 00:11:42.248 nvmf threads initialize successfully 00:11:42.248 bdev subsystem init successfully 00:11:42.248 created a nvmf target service 00:11:42.248 create targets's poll groups done 00:11:42.248 all subsystems of target started 00:11:42.248 nvmf target is running 00:11:42.248 all subsystems of target stopped 00:11:42.248 destroy targets's poll groups done 00:11:42.248 destroyed the nvmf target service 00:11:42.248 bdev subsystem finish successfully 00:11:42.248 nvmf threads destroy successfully 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.248 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.509 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:42.509 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:42.509 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.509 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.769 00:11:42.769 real 0m21.356s 00:11:42.769 user 0m46.384s 00:11:42.769 sys 0m6.994s 00:11:42.769 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.769 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.769 ************************************ 00:11:42.769 END TEST nvmf_example 00:11:42.769 ************************************ 00:11:42.769 09:45:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:42.769 09:45:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.769 09:45:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.769 09:45:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:42.769 ************************************ 00:11:42.769 START TEST nvmf_filesystem 00:11:42.769 ************************************ 00:11:42.769 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:42.769 * Looking for test storage... 00:11:42.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.769 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:42.769 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:42.769 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:43.033 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:43.033 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.033 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.033 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.033 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.033 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.033 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.033 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.033 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:43.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.034 --rc genhtml_branch_coverage=1 00:11:43.034 --rc genhtml_function_coverage=1 00:11:43.034 --rc genhtml_legend=1 00:11:43.034 --rc geninfo_all_blocks=1 00:11:43.034 --rc geninfo_unexecuted_blocks=1 00:11:43.034 00:11:43.034 ' 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:43.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.034 --rc genhtml_branch_coverage=1 00:11:43.034 --rc genhtml_function_coverage=1 00:11:43.034 --rc genhtml_legend=1 00:11:43.034 --rc geninfo_all_blocks=1 00:11:43.034 --rc geninfo_unexecuted_blocks=1 00:11:43.034 00:11:43.034 ' 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:43.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.034 --rc genhtml_branch_coverage=1 00:11:43.034 --rc genhtml_function_coverage=1 00:11:43.034 --rc genhtml_legend=1 00:11:43.034 --rc geninfo_all_blocks=1 00:11:43.034 --rc geninfo_unexecuted_blocks=1 00:11:43.034 00:11:43.034 ' 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:43.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.034 --rc genhtml_branch_coverage=1 00:11:43.034 --rc genhtml_function_coverage=1 00:11:43.034 --rc genhtml_legend=1 00:11:43.034 --rc geninfo_all_blocks=1 00:11:43.034 --rc geninfo_unexecuted_blocks=1 00:11:43.034 00:11:43.034 ' 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:43.034 09:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:43.034 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:43.034 
09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:43.035 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:43.036 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:43.036 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:43.036 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:43.036 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:43.036 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:43.036 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:43.036 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:43.036 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:43.036 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:43.036 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:43.036 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:43.036 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:43.036 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:43.036 #define SPDK_CONFIG_H 00:11:43.036 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:43.036 #define SPDK_CONFIG_APPS 1 00:11:43.036 #define SPDK_CONFIG_ARCH native 00:11:43.036 #undef SPDK_CONFIG_ASAN 00:11:43.036 #undef SPDK_CONFIG_AVAHI 00:11:43.036 #undef SPDK_CONFIG_CET 00:11:43.036 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:43.036 #define SPDK_CONFIG_COVERAGE 1 00:11:43.036 #define SPDK_CONFIG_CROSS_PREFIX 00:11:43.036 #undef SPDK_CONFIG_CRYPTO 00:11:43.036 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:43.036 #undef SPDK_CONFIG_CUSTOMOCF 00:11:43.036 #undef SPDK_CONFIG_DAOS 00:11:43.036 #define SPDK_CONFIG_DAOS_DIR 00:11:43.036 #define SPDK_CONFIG_DEBUG 1 00:11:43.036 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:43.036 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:43.036 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:43.036 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:43.036 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:43.036 #undef SPDK_CONFIG_DPDK_UADK 00:11:43.036 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:43.036 #define SPDK_CONFIG_EXAMPLES 1 00:11:43.036 #undef SPDK_CONFIG_FC 00:11:43.036 #define SPDK_CONFIG_FC_PATH 00:11:43.036 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:43.036 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:43.036 #define SPDK_CONFIG_FSDEV 1 00:11:43.036 #undef SPDK_CONFIG_FUSE 00:11:43.036 #undef SPDK_CONFIG_FUZZER 00:11:43.036 #define SPDK_CONFIG_FUZZER_LIB 00:11:43.036 #undef SPDK_CONFIG_GOLANG 00:11:43.036 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:43.036 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:43.036 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:43.036 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:43.036 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:43.036 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:43.036 #undef SPDK_CONFIG_HAVE_LZ4 00:11:43.036 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:43.036 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:43.036 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:43.036 #define SPDK_CONFIG_IDXD 1 00:11:43.036 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:43.036 #undef SPDK_CONFIG_IPSEC_MB 00:11:43.036 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:43.036 #define SPDK_CONFIG_ISAL 1 00:11:43.036 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:43.036 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:43.036 #define SPDK_CONFIG_LIBDIR 00:11:43.036 #undef SPDK_CONFIG_LTO 00:11:43.036 #define SPDK_CONFIG_MAX_LCORES 128 00:11:43.036 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:43.036 #define SPDK_CONFIG_NVME_CUSE 1 00:11:43.036 #undef SPDK_CONFIG_OCF 00:11:43.036 #define SPDK_CONFIG_OCF_PATH 00:11:43.036 #define SPDK_CONFIG_OPENSSL_PATH 00:11:43.036 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:43.036 #define SPDK_CONFIG_PGO_DIR 00:11:43.036 #undef SPDK_CONFIG_PGO_USE 00:11:43.036 #define SPDK_CONFIG_PREFIX /usr/local 00:11:43.036 #undef SPDK_CONFIG_RAID5F 00:11:43.036 #undef SPDK_CONFIG_RBD 00:11:43.036 #define SPDK_CONFIG_RDMA 1 00:11:43.036 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:43.036 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:43.036 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:43.036 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:43.036 #define SPDK_CONFIG_SHARED 1 00:11:43.036 #undef SPDK_CONFIG_SMA 00:11:43.036 #define SPDK_CONFIG_TESTS 1 00:11:43.036 #undef SPDK_CONFIG_TSAN 
00:11:43.036 #define SPDK_CONFIG_UBLK 1 00:11:43.036 #define SPDK_CONFIG_UBSAN 1 00:11:43.036 #undef SPDK_CONFIG_UNIT_TESTS 00:11:43.036 #undef SPDK_CONFIG_URING 00:11:43.036 #define SPDK_CONFIG_URING_PATH 00:11:43.036 #undef SPDK_CONFIG_URING_ZNS 00:11:43.036 #undef SPDK_CONFIG_USDT 00:11:43.036 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:43.036 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:43.036 #define SPDK_CONFIG_VFIO_USER 1 00:11:43.036 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:43.036 #define SPDK_CONFIG_VHOST 1 00:11:43.036 #define SPDK_CONFIG_VIRTIO 1 00:11:43.036 #undef SPDK_CONFIG_VTUNE 00:11:43.036 #define SPDK_CONFIG_VTUNE_DIR 00:11:43.036 #define SPDK_CONFIG_WERROR 1 00:11:43.036 #define SPDK_CONFIG_WPDK_DIR 00:11:43.036 #undef SPDK_CONFIG_XNVME 00:11:43.036 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:43.036 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:43.036 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:43.036 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.036 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.036 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.036 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:43.037 09:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
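The interleaved ': 0' / 'export SPDK_TEST_*' pairs above, which continue through common/autotest_common.sh@178 below, are bash's parameter-default idiom as xtrace prints it. A minimal sketch of the presumed source, with flag names taken from this trace:

    # ':' is a no-op; "${VAR:=default}" assigns only when VAR is unset,
    # so values injected by the CI job (e.g. SPDK_TEST_NVMF=1) survive.
    : "${SPDK_TEST_NVMF:=0}"              # traced as ': 1' because the job preset it
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"  # traced as ': tcp'
    export SPDK_TEST_NVMF_TRANSPORT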
00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:43.037 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:43.038 09:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:43.038 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
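The repeated runs inside LD_LIBRARY_PATH and PYTHONPATH above are a side effect of nesting: each run_test level re-sources autotest_common.sh, and the exports at common/autotest_common.sh@184/@191 append the same directories every time (the leading ':' shows both variables started out empty). A sketch of the presumed append, where $rootdir stands in for the spdk checkout:

    # re-appended on every source; the duplicates are harmless for library
    # and module lookup, they only bloat the environment and this trace
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR
    export PYTHONPATH=$PYTHONPATH:$rootdir/python:$rootdir/test/rpc_plugins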
00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:43.039 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
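The suppression setup at common/autotest_common.sh@204-@244 above rebuilds the LeakSanitizer ignore list on every run; a sketch reconstructed from the traced commands (only the libfuse3 entry is visible in this trace):

    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"                          # start from a clean file
    echo "leak:libfuse3.so" >> "$asan_suppression_file"      # ignore leaks attributed to libfuse3
    export LSAN_OPTIONS=suppressions=$asan_suppression_file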
00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1249203 ]] 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1249203 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
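set_test_storage, entered above with a 2 GiB request, picks a directory with enough free space for the test data, falling back to a fresh path under /tmp. A condensed sketch using the names visible in the trace; $testdir is assumed to be set by the calling test, and the *1024 scaling is inferred from the byte-sized values recorded below:

    # pad the request with 64 MiB of headroom, which is why the trace shows
    # requested_size grow from 2147483648 to 2214592512
    requested_size=$((requested_size + (64 << 20)))

    storage_fallback=$(mktemp -udt spdk.XXXXXX)   # -u: generate a name, create nothing yet
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

    # parse 'df -T' (header dropped) into per-mount tables; df reports
    # 1K blocks, so scale everything to bytes
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        avails["$mount"]=$((avail * 1024))
        sizes["$mount"]=$((size * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)

Each candidate is then resolved to its mount point and accepted once that mount's available space covers requested_size, which is how the overlay root at / is selected below.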
00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.UCbgfh 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.UCbgfh/tests/target /tmp/spdk.UCbgfh 00:11:43.040 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:43.041 09:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=118501175296 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356509184 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10855333888 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666886144 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678252544 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847934976 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871302656 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23367680 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:43.041 09:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677810176 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=446464 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935634944 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935647232 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:43.041 * Looking for test storage... 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=118501175296 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13069926400 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:43.041 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:43.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.303 --rc genhtml_branch_coverage=1 00:11:43.303 --rc genhtml_function_coverage=1 00:11:43.303 --rc genhtml_legend=1 00:11:43.303 --rc geninfo_all_blocks=1 00:11:43.303 --rc geninfo_unexecuted_blocks=1 00:11:43.303 00:11:43.303 ' 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:43.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.303 --rc genhtml_branch_coverage=1 00:11:43.303 --rc genhtml_function_coverage=1 00:11:43.303 --rc genhtml_legend=1 00:11:43.303 --rc geninfo_all_blocks=1 00:11:43.303 --rc geninfo_unexecuted_blocks=1 00:11:43.303 00:11:43.303 ' 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:43.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.303 --rc genhtml_branch_coverage=1 00:11:43.303 --rc genhtml_function_coverage=1 00:11:43.303 --rc genhtml_legend=1 00:11:43.303 --rc geninfo_all_blocks=1 00:11:43.303 --rc geninfo_unexecuted_blocks=1 00:11:43.303 00:11:43.303 ' 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:43.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.303 --rc genhtml_branch_coverage=1 00:11:43.303 --rc genhtml_function_coverage=1 00:11:43.303 --rc genhtml_legend=1 00:11:43.303 --rc geninfo_all_blocks=1 00:11:43.303 --rc geninfo_unexecuted_blocks=1 00:11:43.303 00:11:43.303 ' 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:43.303 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:43.303 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:43.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:43.304 09:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:43.304 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:51.445 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:51.445 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.445 09:45:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:51.445 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:51.445 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:51.445 09:45:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.445 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:51.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:11:51.446 00:11:51.446 --- 10.0.0.2 ping statistics --- 00:11:51.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.446 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:51.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:11:51.446 00:11:51.446 --- 10.0.0.1 ping statistics --- 00:11:51.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.446 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.446 ************************************ 00:11:51.446 START TEST nvmf_filesystem_no_in_capsule 00:11:51.446 ************************************ 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1252897 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1252897 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1252897 ']' 00:11:51.446 
09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.446 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.446 [2024-11-20 09:45:21.690675] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:11:51.446 [2024-11-20 09:45:21.690741] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.446 [2024-11-20 09:45:21.789412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.446 [2024-11-20 09:45:21.843036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.446 [2024-11-20 09:45:21.843090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.446 [2024-11-20 09:45:21.843098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.446 [2024-11-20 09:45:21.843106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.446 [2024-11-20 09:45:21.843112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
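Between the EAL startup banner above and the reactor messages below, waitforlisten polls the freshly started target until its RPC socket answers. A minimal sketch of that poll loop, assuming SPDK's stock scripts/rpc.py client and the default /var/tmp/spdk.sock path; the 100-iteration retry budget is an illustrative assumption:

    # Wait until the nvmf_tgt process (pid $1) responds on its UNIX-domain
    # RPC socket, or bail out if it dies first.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target exited
            scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }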
00:11:51.446 [2024-11-20 09:45:21.845450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.446 [2024-11-20 09:45:21.845612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.446 [2024-11-20 09:45:21.845777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.446 [2024-11-20 09:45:21.845777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.708 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.708 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:51.708 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:51.708 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:51.708 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.708 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.708 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:51.708 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:51.708 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.708 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.708 [2024-11-20 09:45:22.569372] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.708 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.708 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:51.708 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.708 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.970 Malloc1 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.970 09:45:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.970 [2024-11-20 09:45:22.720140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.970 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:51.970 { 00:11:51.970 "name": "Malloc1", 00:11:51.970 "aliases": [ 00:11:51.970 "5d150efc-2539-4fa4-96c6-a9b609d8f0e0" 00:11:51.970 ], 00:11:51.970 "product_name": "Malloc disk", 00:11:51.970 "block_size": 512, 00:11:51.970 "num_blocks": 1048576, 00:11:51.970 "uuid": "5d150efc-2539-4fa4-96c6-a9b609d8f0e0", 00:11:51.970 "assigned_rate_limits": { 00:11:51.970 "rw_ios_per_sec": 0, 00:11:51.970 "rw_mbytes_per_sec": 0, 00:11:51.970 "r_mbytes_per_sec": 0, 00:11:51.970 "w_mbytes_per_sec": 0 00:11:51.970 }, 00:11:51.970 "claimed": true, 00:11:51.970 "claim_type": "exclusive_write", 00:11:51.970 "zoned": false, 00:11:51.970 "supported_io_types": { 00:11:51.970 "read": 
true, 00:11:51.970 "write": true, 00:11:51.970 "unmap": true, 00:11:51.970 "flush": true, 00:11:51.970 "reset": true, 00:11:51.970 "nvme_admin": false, 00:11:51.970 "nvme_io": false, 00:11:51.971 "nvme_io_md": false, 00:11:51.971 "write_zeroes": true, 00:11:51.971 "zcopy": true, 00:11:51.971 "get_zone_info": false, 00:11:51.971 "zone_management": false, 00:11:51.971 "zone_append": false, 00:11:51.971 "compare": false, 00:11:51.971 "compare_and_write": false, 00:11:51.971 "abort": true, 00:11:51.971 "seek_hole": false, 00:11:51.971 "seek_data": false, 00:11:51.971 "copy": true, 00:11:51.971 "nvme_iov_md": false 00:11:51.971 }, 00:11:51.971 "memory_domains": [ 00:11:51.971 { 00:11:51.971 "dma_device_id": "system", 00:11:51.971 "dma_device_type": 1 00:11:51.971 }, 00:11:51.971 { 00:11:51.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.971 "dma_device_type": 2 00:11:51.971 } 00:11:51.971 ], 00:11:51.971 "driver_specific": {} 00:11:51.971 } 00:11:51.971 ]' 00:11:51.971 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:51.971 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:51.971 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:51.971 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:51.971 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:51.971 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:51.971 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:51.971 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:53.884 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:53.884 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:53.884 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.884 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:53.884 09:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:55.915 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:55.915 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:55.915 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:55.915 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:55.915 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.915 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:55.915 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:55.915 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:55.915 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:55.915 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:55.915 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:55.915 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:55.915 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:55.915 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:55.915 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:55.915 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:55.915 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:55.915 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:56.486 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:57.427 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:57.427 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:57.427 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:57.427 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.427 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.427 ************************************ 00:11:57.427 START TEST filesystem_ext4 00:11:57.427 ************************************ 00:11:57.427 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
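A condensed sketch of the initiator-side preparation traced above: connect over TCP, locate the exported namespace by its serial, and lay down a GPT partition for the filesystem tests. Addresses, NQNs, the SPDKISFASTANDAWESOME serial, and the parted arguments mirror the log; the p1 suffix assumes an NVMe-style device name, and nvme-cli plus parted are assumed installed:

    # Attach the initiator to the target subsystem in the other namespace.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
         --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    # Find the block device whose serial matches the subsystem's.
    dev=$(lsblk -l -o NAME,SERIAL | awk '/SPDKISFASTANDAWESOME/{print $1}')
    # One full-size partition, exactly as the harness creates it.
    parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe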
00:11:57.427 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:57.427 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:57.427 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:57.427 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:57.427 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:57.427 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:57.427 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:57.427 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:57.427 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:57.427 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:57.427 mke2fs 1.47.0 (5-Feb-2023) 00:11:57.427 Discarding device blocks: 0/522240 done 00:11:57.427 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:57.427 Filesystem UUID: cb187881-9ee6-479e-89bb-38085d96ec26 00:11:57.427 Superblock backups stored on blocks: 00:11:57.427 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:57.427 00:11:57.427 Allocating group tables: 0/64 done 00:11:57.427 Writing inode tables: 0/64 done 00:12:00.728 Creating journal (8192 blocks): done 00:12:00.728 Writing superblocks and filesystem accounting information: 0/64 done 00:12:00.728 00:12:00.728 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:00.728 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:07.313 
09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1252897 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:07.313 00:12:07.313 real 0m9.135s 00:12:07.313 user 0m0.032s 00:12:07.313 sys 0m0.075s 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:07.313 ************************************ 00:12:07.313 END TEST filesystem_ext4 00:12:07.313 ************************************ 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.313 ************************************ 00:12:07.313 START TEST filesystem_btrfs 00:12:07.313 ************************************ 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:07.313 09:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:07.313 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:07.313 btrfs-progs v6.8.1 00:12:07.313 See https://btrfs.readthedocs.io for more information. 00:12:07.313 00:12:07.313 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:07.313 NOTE: several default settings have changed in version 5.15, please make sure 00:12:07.313 this does not affect your deployments: 00:12:07.313 - DUP for metadata (-m dup) 00:12:07.313 - enabled no-holes (-O no-holes) 00:12:07.313 - enabled free-space-tree (-R free-space-tree) 00:12:07.314 00:12:07.314 Label: (null) 00:12:07.314 UUID: 838b10d6-715c-46ef-a3e9-87d8c9e18a3b 00:12:07.314 Node size: 16384 00:12:07.314 Sector size: 4096 (CPU page size: 4096) 00:12:07.314 Filesystem size: 510.00MiB 00:12:07.314 Block group profiles: 00:12:07.314 Data: single 8.00MiB 00:12:07.314 Metadata: DUP 32.00MiB 00:12:07.314 System: DUP 8.00MiB 00:12:07.314 SSD detected: yes 00:12:07.314 Zoned device: no 00:12:07.314 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:07.314 Checksum: crc32c 00:12:07.314 Number of devices: 1 00:12:07.314 Devices: 00:12:07.314 ID SIZE PATH 00:12:07.314 1 510.00MiB /dev/nvme0n1p1 00:12:07.314 00:12:07.314 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:07.314 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1252897 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:07.885 
09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:07.885 00:12:07.885 real 0m1.143s 00:12:07.885 user 0m0.026s 00:12:07.885 sys 0m0.119s 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:07.885 ************************************ 00:12:07.885 END TEST filesystem_btrfs 00:12:07.885 ************************************ 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.885 ************************************ 00:12:07.885 START TEST filesystem_xfs 00:12:07.885 ************************************ 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:07.885 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:07.885 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:07.885 = sectsz=512 attr=2, projid32bit=1 00:12:07.885 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:07.885 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:07.885 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:07.885 = sunit=0 swidth=0 blks 00:12:07.885 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:07.885 log =internal log bsize=4096 blocks=16384, version=2 00:12:07.885 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:07.885 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:08.832 Discarding blocks...Done. 00:12:08.832 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:08.832 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:11.410 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:11.410 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:11.410 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:11.410 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:11.410 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:11.410 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:11.410 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1252897 00:12:11.410 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:11.410 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:11.410 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:11.410 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:11.410 00:12:11.410 real 0m3.521s 00:12:11.410 user 0m0.027s 00:12:11.410 sys 0m0.079s 00:12:11.410 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.410 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:11.410 ************************************ 00:12:11.410 END TEST filesystem_xfs 00:12:11.410 ************************************ 00:12:11.410 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:11.410 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:11.410 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:11.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.671 09:45:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1252897 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1252897 ']' 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1252897 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1252897 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1252897' 00:12:11.671 killing process with pid 1252897 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1252897 00:12:11.671 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 1252897 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:11.932 00:12:11.932 real 0m21.043s 00:12:11.932 user 1m23.174s 00:12:11.932 sys 0m1.460s 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.932 ************************************ 00:12:11.932 END TEST nvmf_filesystem_no_in_capsule 00:12:11.932 ************************************ 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:11.932 ************************************ 00:12:11.932 START TEST nvmf_filesystem_in_capsule 00:12:11.932 ************************************ 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1257351 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1257351 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1257351 ']' 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
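[annotation] At this point the harness restarts the target for the in-capsule variant: `nvmf_tgt` is launched inside the `cvl_0_0_ns_spdk` network namespace and the run blocks until the RPC socket at /var/tmp/spdk.sock answers. A hedged sketch of that start-and-wait step — the poll below uses the real `rpc_get_methods` RPC, but the actual waitforlisten helper may probe the socket differently:

# Start the target in the test netns, as the trace does, then poll its
# UNIX-domain RPC socket until it is ready. SPDK_ROOT matches this workspace.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/spdk.sock

ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# rpc.py exits non-zero until the target is listening on $SOCK.
for _ in $(seq 1 100); do
    "$SPDK_ROOT/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done
kill -0 "$nvmfpid"   # fail if the target died instead of listening

Once the target is up, the run creates the TCP transport with `-c 4096` (visible just below), so commands carry up to 4 KiB of in-capsule data; that capsule size is the only intended difference from the no_in_capsule pass above.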
00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.932 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.932 [2024-11-20 09:45:42.809802] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:12:11.932 [2024-11-20 09:45:42.809861] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.192 [2024-11-20 09:45:42.904906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.192 [2024-11-20 09:45:42.944958] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.192 [2024-11-20 09:45:42.944999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.192 [2024-11-20 09:45:42.945005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.192 [2024-11-20 09:45:42.945010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.192 [2024-11-20 09:45:42.945014] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.192 [2024-11-20 09:45:42.946818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.192 [2024-11-20 09:45:42.946968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.192 [2024-11-20 09:45:42.947123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.192 [2024-11-20 09:45:42.947126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.762 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.762 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:12.762 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:12.762 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:12.762 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.762 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.762 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:12.762 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:12.762 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.762 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.762 [2024-11-20 09:45:43.668203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.762 09:45:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.762 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:12.762 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.762 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.024 Malloc1 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.024 [2024-11-20 09:45:43.791337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:13.024 09:45:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:13.024 { 00:12:13.024 "name": "Malloc1", 00:12:13.024 "aliases": [ 00:12:13.024 "e45563c5-368a-415b-a2a2-4df4433e887a" 00:12:13.024 ], 00:12:13.024 "product_name": "Malloc disk", 00:12:13.024 "block_size": 512, 00:12:13.024 "num_blocks": 1048576, 00:12:13.024 "uuid": "e45563c5-368a-415b-a2a2-4df4433e887a", 00:12:13.024 "assigned_rate_limits": { 00:12:13.024 "rw_ios_per_sec": 0, 00:12:13.024 "rw_mbytes_per_sec": 0, 00:12:13.024 "r_mbytes_per_sec": 0, 00:12:13.024 "w_mbytes_per_sec": 0 00:12:13.024 }, 00:12:13.024 "claimed": true, 00:12:13.024 "claim_type": "exclusive_write", 00:12:13.024 "zoned": false, 00:12:13.024 "supported_io_types": { 00:12:13.024 "read": true, 00:12:13.024 "write": true, 00:12:13.024 "unmap": true, 00:12:13.024 "flush": true, 00:12:13.024 "reset": true, 00:12:13.024 "nvme_admin": false, 00:12:13.024 "nvme_io": false, 00:12:13.024 "nvme_io_md": false, 00:12:13.024 "write_zeroes": true, 00:12:13.024 "zcopy": true, 00:12:13.024 "get_zone_info": false, 00:12:13.024 "zone_management": false, 00:12:13.024 "zone_append": false, 00:12:13.024 "compare": false, 00:12:13.024 "compare_and_write": false, 00:12:13.024 "abort": true, 00:12:13.024 "seek_hole": false, 00:12:13.024 "seek_data": false, 00:12:13.024 "copy": true, 00:12:13.024 "nvme_iov_md": false 00:12:13.024 }, 00:12:13.024 "memory_domains": [ 00:12:13.024 { 00:12:13.024 "dma_device_id": "system", 00:12:13.024 "dma_device_type": 1 00:12:13.024 }, 00:12:13.024 { 00:12:13.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.024 "dma_device_type": 2 00:12:13.024 } 00:12:13.024 ], 00:12:13.024 "driver_specific": {} 00:12:13.024 } 00:12:13.024 ]' 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:13.024 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:14.935 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:14.935 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:14.935 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:14.935 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:14.935 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:16.847 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:16.847 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:16.847 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:16.847 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:16.847 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:16.847 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:16.847 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:16.847 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:16.847 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:16.847 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:16.847 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:16.847 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:16.847 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:16.847 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:16.847 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:16.847 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:16.847 09:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:17.110 09:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:17.371 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:18.314 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:18.314 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:18.314 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:18.314 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.314 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.314 ************************************ 00:12:18.314 START TEST filesystem_in_capsule_ext4 00:12:18.314 ************************************ 00:12:18.314 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:18.314 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:18.314 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:18.314 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:18.314 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:18.314 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:18.314 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:18.314 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:18.314 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:18.314 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:18.314 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:18.314 mke2fs 1.47.0 (5-Feb-2023) 00:12:18.575 Discarding device blocks: 0/522240 done 00:12:18.575 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:18.575 Filesystem UUID: 9693245d-886e-4667-b60f-c75652cb521e 00:12:18.575 Superblock backups stored on blocks: 00:12:18.575 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:18.575 00:12:18.575 Allocating group tables: 0/64 done 00:12:18.575 Writing inode tables: 
0/64 done 00:12:18.575 Creating journal (8192 blocks): done 00:12:20.793 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:12:20.793 00:12:20.793 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:20.793 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1257351 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:26.079 00:12:26.079 real 0m7.706s 00:12:26.079 user 0m0.029s 00:12:26.079 sys 0m0.080s 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:26.079 ************************************ 00:12:26.079 END TEST filesystem_in_capsule_ext4 00:12:26.079 ************************************ 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.079 
************************************ 00:12:26.079 START TEST filesystem_in_capsule_btrfs 00:12:26.079 ************************************ 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:26.079 09:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:26.340 btrfs-progs v6.8.1 00:12:26.340 See https://btrfs.readthedocs.io for more information. 00:12:26.340 00:12:26.340 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:26.340 NOTE: several default settings have changed in version 5.15, please make sure 00:12:26.340 this does not affect your deployments: 00:12:26.340 - DUP for metadata (-m dup) 00:12:26.340 - enabled no-holes (-O no-holes) 00:12:26.340 - enabled free-space-tree (-R free-space-tree) 00:12:26.340 00:12:26.340 Label: (null) 00:12:26.340 UUID: 7daa4d72-77a0-4cb8-9898-b1199289bc6e 00:12:26.340 Node size: 16384 00:12:26.340 Sector size: 4096 (CPU page size: 4096) 00:12:26.340 Filesystem size: 510.00MiB 00:12:26.340 Block group profiles: 00:12:26.340 Data: single 8.00MiB 00:12:26.340 Metadata: DUP 32.00MiB 00:12:26.340 System: DUP 8.00MiB 00:12:26.340 SSD detected: yes 00:12:26.340 Zoned device: no 00:12:26.340 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:26.340 Checksum: crc32c 00:12:26.340 Number of devices: 1 00:12:26.340 Devices: 00:12:26.340 ID SIZE PATH 00:12:26.340 1 510.00MiB /dev/nvme0n1p1 00:12:26.340 00:12:26.340 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:26.340 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:26.600 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:26.600 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:26.600 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:26.600 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:26.600 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:26.600 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:26.600 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1257351 00:12:26.600 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:26.600 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:26.600 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:26.600 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:26.600 00:12:26.600 real 0m0.518s 00:12:26.600 user 0m0.024s 00:12:26.600 sys 0m0.127s 00:12:26.600 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.600 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:26.600 ************************************ 00:12:26.600 END TEST filesystem_in_capsule_btrfs 00:12:26.600 ************************************ 00:12:26.860 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:26.860 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:26.860 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.860 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.860 ************************************ 00:12:26.860 START TEST filesystem_in_capsule_xfs 00:12:26.860 ************************************ 00:12:26.860 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:26.860 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:26.860 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:26.860 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:26.860 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:26.860 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:26.860 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:26.860 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:26.860 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:26.860 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:26.860 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:26.860 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:26.860 = sectsz=512 attr=2, projid32bit=1 00:12:26.860 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:26.860 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:26.860 data = bsize=4096 blocks=130560, imaxpct=25 00:12:26.860 = sunit=0 swidth=0 blks 00:12:26.860 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:26.860 log =internal log bsize=4096 blocks=16384, version=2 00:12:26.860 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:26.860 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:27.801 Discarding blocks...Done. 
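[annotation] After each mkfs the test runs the same smoke cycle, visible in the traces on either side of this point: mount the partition, create and delete a file with a sync after each step, unmount, confirm the target process is still alive, and verify via lsblk that both the device and the partition are still present. A condensed sketch of that cycle, with the PID and device names taken from this particular run:

# Hypothetical condensation of the per-filesystem smoke test traced above.
nvmfpid=1257351
dev=/dev/nvme0n1p1

mount "$dev" /mnt/device
touch /mnt/device/aaa && sync     # write something and flush it
rm /mnt/device/aaa && sync        # delete it and flush again
umount /mnt/device

kill -0 "$nvmfpid"                        # target process must survive the I/O
lsblk -l -o NAME | grep -q -w nvme0n1     # device still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible

The `i=0` the trace sets just before umount is presumably a retry counter for busy unmounts; it never increments in this run, so the sketch leaves the retry loop out.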
00:12:27.801 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:27.801 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1257351 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:29.714 00:12:29.714 real 0m2.838s 00:12:29.714 user 0m0.022s 00:12:29.714 sys 0m0.084s 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:29.714 ************************************ 00:12:29.714 END TEST filesystem_in_capsule_xfs 00:12:29.714 ************************************ 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1257351 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1257351 ']' 00:12:29.714 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1257351 00:12:29.975 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:29.975 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.975 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1257351 00:12:29.975 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.975 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.975 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1257351' 00:12:29.975 killing process with pid 1257351 00:12:29.975 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1257351 00:12:29.975 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1257351 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:30.237 00:12:30.237 real 0m18.144s 00:12:30.237 user 1m11.693s 00:12:30.237 sys 0m1.456s 00:12:30.237 09:46:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.237 ************************************ 00:12:30.237 END TEST nvmf_filesystem_in_capsule 00:12:30.237 ************************************ 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:30.237 rmmod nvme_tcp 00:12:30.237 rmmod nvme_fabrics 00:12:30.237 rmmod nvme_keyring 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:30.237 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:30.237 09:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:30.237 09:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:30.237 09:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.237 09:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.237 09:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:32.787 00:12:32.787 real 0m49.547s 00:12:32.787 user 2m37.407s 00:12:32.787 sys 0m8.724s 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:32.787 
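
nvmftestfini's cleanup, visible above, has three concerns: unload the kernel initiator modules, strip only the firewall rules the test tagged, and dismantle the namespace topology. Roughly, with the namespace and device names from this run (the namespace removal is an approximation of _remove_spdk_ns):

    modprobe -v -r nvme-tcp          # common.sh@126: unload the TCP initiator transport
    modprobe -v -r nvme-fabrics      # @127: then the fabrics core

    # iptr: restore the ruleset minus everything tagged SPDK_NVMF at setup time
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip netns delete cvl_0_0_ns_spdk  # _remove_spdk_ns, approximately
    ip -4 addr flush cvl_0_1         # @303: deconfigure the initiator port
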
************************************ 00:12:32.787 END TEST nvmf_filesystem 00:12:32.787 ************************************ 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:32.787 ************************************ 00:12:32.787 START TEST nvmf_target_discovery 00:12:32.787 ************************************ 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:32.787 * Looking for test storage... 00:12:32.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:32.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.787 --rc genhtml_branch_coverage=1 00:12:32.787 --rc genhtml_function_coverage=1 00:12:32.787 --rc genhtml_legend=1 00:12:32.787 --rc geninfo_all_blocks=1 00:12:32.787 --rc geninfo_unexecuted_blocks=1 00:12:32.787 00:12:32.787 ' 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:32.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.787 --rc genhtml_branch_coverage=1 00:12:32.787 --rc genhtml_function_coverage=1 00:12:32.787 --rc genhtml_legend=1 00:12:32.787 --rc geninfo_all_blocks=1 00:12:32.787 --rc geninfo_unexecuted_blocks=1 00:12:32.787 00:12:32.787 ' 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:32.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.787 --rc genhtml_branch_coverage=1 00:12:32.787 --rc genhtml_function_coverage=1 00:12:32.787 --rc genhtml_legend=1 00:12:32.787 --rc geninfo_all_blocks=1 00:12:32.787 --rc geninfo_unexecuted_blocks=1 00:12:32.787 00:12:32.787 ' 00:12:32.787 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:32.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.787 --rc genhtml_branch_coverage=1 00:12:32.787 --rc genhtml_function_coverage=1 00:12:32.787 --rc genhtml_legend=1 00:12:32.787 --rc geninfo_all_blocks=1 00:12:32.787 --rc geninfo_unexecuted_blocks=1 00:12:32.787 00:12:32.787 ' 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
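
The cmp_versions trace above (scripts/common.sh@333-368) is deciding whether the installed lcov predates 2.x, which changes the coverage flags exported next. The algorithm is a plain component-wise compare on dot-separated fields; a simplified sketch, not the exact helper:

    lt() {   # lt A B: succeed when version A sorts before version B
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing components count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # versions are equal
    }

    lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
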
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:32.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:32.788 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:40.924 09:46:10 
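
One real wart surfaces above: common.sh line 33 evaluates '[' '' -eq 1 ']' and bash reports "integer expression expected", because an unset flag reaches an arithmetic test as the empty string. The harness tolerates the non-zero status, but the defensive form is to default the value first. An illustration with a hypothetical variable name:

    unset SOME_TEST_FLAG                 # hypothetical flag, for illustration only
    [ "$SOME_TEST_FLAG" -eq 1 ]          # errors: '' is not an integer (exit status 2)
    [ "${SOME_TEST_FLAG:-0}" -eq 1 ]     # safe: empty/unset defaults to 0 (exit status 1)
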
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:40.924 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:40.924 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:40.924 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
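
Device discovery in gather_supported_nvmf_pci_devs, as traced through here, matches PCI IDs against the e810/x722/mlx lists and then asks sysfs which netdev each surviving function exposes. The sysfs lookup is just a glob, using the first matched function from this run:

    pci=0000:4b:00.0                                    # first E810 function found above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # common.sh@411: netdev dirs under the PCI device
    pci_net_devs=("${pci_net_devs[@]##*/}")             # @427: strip the sysfs path, keep the names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # @428 -> cvl_0_0 here
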
00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:40.924 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:40.924 09:46:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:40.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:40.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:12:40.924 00:12:40.924 --- 10.0.0.2 ping statistics --- 00:12:40.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.924 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:40.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:40.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:12:40.924 00:12:40.924 --- 10.0.0.1 ping statistics --- 00:12:40.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.924 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1265341 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1265341 00:12:40.924 09:46:10 
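
The topology built in common.sh@250-291 above is worth spelling out: with two ports on the same host, the first (cvl_0_0) moves into a fresh network namespace to act as the target at 10.0.0.2, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and a tagged iptables rule opens TCP/4420. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk                        # @271
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # @274: target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # @277: initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # @278: target side
    ip link set cvl_0_1 up                              # @281
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up    # @283
    ip netns exec cvl_0_0_ns_spdk ip link set lo up         # @284

    # @287: open the NVMe/TCP port, tagging the rule so nvmftestfini can strip it later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.2                                  # @290: initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # @291: target -> initiator
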
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1265341 ']' 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.924 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.924 [2024-11-20 09:46:10.997719] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:12:40.924 [2024-11-20 09:46:10.997782] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.924 [2024-11-20 09:46:11.098662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:40.924 [2024-11-20 09:46:11.151232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.924 [2024-11-20 09:46:11.151286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.924 [2024-11-20 09:46:11.151294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.924 [2024-11-20 09:46:11.151301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.924 [2024-11-20 09:46:11.151308] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
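
nvmfappstart then launches the target inside that namespace with four cores and all tracepoint groups, and blocks until the RPC socket answers. A minimal sketch of the waitforlisten idea, assuming rpc.py and the default /var/tmp/spdk.sock; the real helper polls with its own retry and timeout logic:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &    # shm id 0, full tracing, cores 0-3
    nvmfpid=$!

    # waitforlisten, approximately: retry until the app answers on its RPC socket
    for (( i = 0; i < 100; i++ )); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done
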
00:12:40.924 [2024-11-20 09:46:11.153698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.924 [2024-11-20 09:46:11.153858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.924 [2024-11-20 09:46:11.154023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.924 [2024-11-20 09:46:11.154024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:40.924 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.924 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:40.924 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:40.924 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:40.924 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.185 [2024-11-20 09:46:11.873849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.185 Null1 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.185 09:46:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.185 [2024-11-20 09:46:11.934359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.185 Null2 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:41.185 Null3 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.185 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.185 Null4 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.185 09:46:12 
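
The provisioning loop in discovery.sh@26-35, whose per-iteration RPCs are traced through here and just below, builds four identical subsystems: a null bdev (102400 MB, 512-byte blocks), a subsystem with a fixed serial that allows any host, the bdev attached as a namespace, and a TCP listener on 10.0.0.2:4420; afterwards a discovery listener and a port-4430 referral are added. As one loop:

    for i in $(seq 1 4); do
        rpc.py bdev_null_create Null$i 102400 512                         # @27
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            -a -s SPDK0000000000000$i                                     # @28: -a = allow any host
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i   # @29
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420                                    # @30
    done
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # @32
    rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430             # @35
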
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.185 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.186 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.186 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:41.186 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.186 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.447 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.447 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:41.447 00:12:41.447 Discovery Log Number of Records 6, Generation counter 6 00:12:41.447 =====Discovery Log Entry 0====== 00:12:41.447 trtype: tcp 00:12:41.447 adrfam: ipv4 00:12:41.447 subtype: current discovery subsystem 00:12:41.447 treq: not required 00:12:41.447 portid: 0 00:12:41.447 trsvcid: 4420 00:12:41.447 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:41.447 traddr: 10.0.0.2 00:12:41.447 eflags: explicit discovery connections, duplicate discovery information 00:12:41.447 sectype: none 00:12:41.447 =====Discovery Log Entry 1====== 00:12:41.447 trtype: tcp 00:12:41.447 adrfam: ipv4 00:12:41.447 subtype: nvme subsystem 00:12:41.447 treq: not required 00:12:41.447 portid: 0 00:12:41.447 trsvcid: 4420 00:12:41.447 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:41.447 traddr: 10.0.0.2 00:12:41.447 eflags: none 00:12:41.447 sectype: none 00:12:41.447 =====Discovery Log Entry 2====== 00:12:41.447 trtype: tcp 00:12:41.447 adrfam: ipv4 00:12:41.447 subtype: nvme subsystem 00:12:41.447 treq: not required 00:12:41.447 portid: 0 00:12:41.447 trsvcid: 4420 00:12:41.447 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:41.447 traddr: 10.0.0.2 00:12:41.447 eflags: none 00:12:41.447 sectype: none 00:12:41.447 =====Discovery Log Entry 3====== 00:12:41.447 trtype: tcp 00:12:41.447 adrfam: ipv4 00:12:41.447 subtype: nvme subsystem 00:12:41.447 treq: not required 00:12:41.447 portid: 0 00:12:41.447 trsvcid: 4420 00:12:41.447 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:41.447 traddr: 10.0.0.2 00:12:41.447 eflags: none 00:12:41.447 sectype: none 00:12:41.447 =====Discovery Log Entry 4====== 00:12:41.447 trtype: tcp 00:12:41.447 adrfam: ipv4 00:12:41.447 subtype: nvme subsystem 
00:12:41.447 treq: not required 00:12:41.447 portid: 0 00:12:41.447 trsvcid: 4420 00:12:41.447 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:41.447 traddr: 10.0.0.2 00:12:41.447 eflags: none 00:12:41.447 sectype: none 00:12:41.447 =====Discovery Log Entry 5====== 00:12:41.447 trtype: tcp 00:12:41.447 adrfam: ipv4 00:12:41.447 subtype: discovery subsystem referral 00:12:41.447 treq: not required 00:12:41.447 portid: 0 00:12:41.447 trsvcid: 4430 00:12:41.447 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:41.447 traddr: 10.0.0.2 00:12:41.447 eflags: none 00:12:41.447 sectype: none 00:12:41.447 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:41.447 Perform nvmf subsystem discovery via RPC 00:12:41.447 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:41.447 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.448 [ 00:12:41.448 { 00:12:41.448 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:41.448 "subtype": "Discovery", 00:12:41.448 "listen_addresses": [ 00:12:41.448 { 00:12:41.448 "trtype": "TCP", 00:12:41.448 "adrfam": "IPv4", 00:12:41.448 "traddr": "10.0.0.2", 00:12:41.448 "trsvcid": "4420" 00:12:41.448 } 00:12:41.448 ], 00:12:41.448 "allow_any_host": true, 00:12:41.448 "hosts": [] 00:12:41.448 }, 00:12:41.448 { 00:12:41.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.448 "subtype": "NVMe", 00:12:41.448 "listen_addresses": [ 00:12:41.448 { 00:12:41.448 "trtype": "TCP", 00:12:41.448 "adrfam": "IPv4", 00:12:41.448 "traddr": "10.0.0.2", 00:12:41.448 "trsvcid": "4420" 00:12:41.448 } 00:12:41.448 ], 00:12:41.448 "allow_any_host": true, 00:12:41.448 "hosts": [], 00:12:41.448 "serial_number": "SPDK00000000000001", 00:12:41.448 "model_number": "SPDK bdev Controller", 00:12:41.448 "max_namespaces": 32, 00:12:41.448 "min_cntlid": 1, 00:12:41.448 "max_cntlid": 65519, 00:12:41.448 "namespaces": [ 00:12:41.448 { 00:12:41.448 "nsid": 1, 00:12:41.448 "bdev_name": "Null1", 00:12:41.448 "name": "Null1", 00:12:41.448 "nguid": "5454265441534758B5410BACC08BCE03", 00:12:41.448 "uuid": "54542654-4153-4758-b541-0bacc08bce03" 00:12:41.448 } 00:12:41.448 ] 00:12:41.448 }, 00:12:41.448 { 00:12:41.448 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:41.448 "subtype": "NVMe", 00:12:41.448 "listen_addresses": [ 00:12:41.448 { 00:12:41.448 "trtype": "TCP", 00:12:41.448 "adrfam": "IPv4", 00:12:41.448 "traddr": "10.0.0.2", 00:12:41.448 "trsvcid": "4420" 00:12:41.448 } 00:12:41.448 ], 00:12:41.448 "allow_any_host": true, 00:12:41.448 "hosts": [], 00:12:41.448 "serial_number": "SPDK00000000000002", 00:12:41.448 "model_number": "SPDK bdev Controller", 00:12:41.448 "max_namespaces": 32, 00:12:41.448 "min_cntlid": 1, 00:12:41.448 "max_cntlid": 65519, 00:12:41.448 "namespaces": [ 00:12:41.448 { 00:12:41.448 "nsid": 1, 00:12:41.448 "bdev_name": "Null2", 00:12:41.448 "name": "Null2", 00:12:41.448 "nguid": "2886C9F66ADC45E8AF7D6AA383FB117B", 00:12:41.448 "uuid": "2886c9f6-6adc-45e8-af7d-6aa383fb117b" 00:12:41.448 } 00:12:41.448 ] 00:12:41.448 }, 00:12:41.448 { 00:12:41.448 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:41.448 "subtype": "NVMe", 00:12:41.448 "listen_addresses": [ 00:12:41.448 { 00:12:41.448 "trtype": "TCP", 00:12:41.448 "adrfam": "IPv4", 00:12:41.448 "traddr": "10.0.0.2", 
00:12:41.448 "trsvcid": "4420" 00:12:41.448 } 00:12:41.448 ], 00:12:41.448 "allow_any_host": true, 00:12:41.448 "hosts": [], 00:12:41.448 "serial_number": "SPDK00000000000003", 00:12:41.448 "model_number": "SPDK bdev Controller", 00:12:41.448 "max_namespaces": 32, 00:12:41.448 "min_cntlid": 1, 00:12:41.448 "max_cntlid": 65519, 00:12:41.448 "namespaces": [ 00:12:41.448 { 00:12:41.448 "nsid": 1, 00:12:41.448 "bdev_name": "Null3", 00:12:41.448 "name": "Null3", 00:12:41.448 "nguid": "81B34533EF2B4F738CB0800A875C2F67", 00:12:41.448 "uuid": "81b34533-ef2b-4f73-8cb0-800a875c2f67" 00:12:41.448 } 00:12:41.448 ] 00:12:41.448 }, 00:12:41.448 { 00:12:41.448 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:41.448 "subtype": "NVMe", 00:12:41.448 "listen_addresses": [ 00:12:41.448 { 00:12:41.448 "trtype": "TCP", 00:12:41.448 "adrfam": "IPv4", 00:12:41.448 "traddr": "10.0.0.2", 00:12:41.448 "trsvcid": "4420" 00:12:41.448 } 00:12:41.448 ], 00:12:41.448 "allow_any_host": true, 00:12:41.448 "hosts": [], 00:12:41.448 "serial_number": "SPDK00000000000004", 00:12:41.448 "model_number": "SPDK bdev Controller", 00:12:41.448 "max_namespaces": 32, 00:12:41.448 "min_cntlid": 1, 00:12:41.448 "max_cntlid": 65519, 00:12:41.448 "namespaces": [ 00:12:41.448 { 00:12:41.448 "nsid": 1, 00:12:41.448 "bdev_name": "Null4", 00:12:41.448 "name": "Null4", 00:12:41.448 "nguid": "8856BD12BEF94058A5E980AE5F7852BF", 00:12:41.448 "uuid": "8856bd12-bef9-4058-a5e9-80ae5f7852bf" 00:12:41.448 } 00:12:41.448 ] 00:12:41.448 } 00:12:41.448 ] 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.448 09:46:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.448 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:41.709 09:46:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:41.709 rmmod nvme_tcp 00:12:41.709 rmmod nvme_fabrics 00:12:41.709 rmmod nvme_keyring 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1265341 ']' 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1265341 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1265341 ']' 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1265341 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1265341 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1265341' 00:12:41.709 killing process with pid 1265341 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1265341 00:12:41.709 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1265341 00:12:41.969 09:46:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:41.969 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:41.969 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:41.969 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:41.969 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:41.969 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:41.969 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:41.969 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:41.969 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:41.969 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.969 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.969 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.514 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:44.514 00:12:44.514 real 0m11.668s 00:12:44.514 user 0m8.788s 00:12:44.514 sys 0m6.090s 00:12:44.514 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.514 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.514 ************************************ 00:12:44.514 END TEST nvmf_target_discovery 00:12:44.514 ************************************ 00:12:44.514 09:46:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:44.514 09:46:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:44.514 09:46:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.514 09:46:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:44.514 ************************************ 00:12:44.514 START TEST nvmf_referrals 00:12:44.514 ************************************ 00:12:44.514 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:44.514 * Looking for test storage... 
00:12:44.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:44.514 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:44.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.515 --rc genhtml_branch_coverage=1 00:12:44.515 --rc genhtml_function_coverage=1 00:12:44.515 --rc genhtml_legend=1 00:12:44.515 --rc geninfo_all_blocks=1 00:12:44.515 --rc geninfo_unexecuted_blocks=1 00:12:44.515 00:12:44.515 ' 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:44.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.515 --rc genhtml_branch_coverage=1 00:12:44.515 --rc genhtml_function_coverage=1 00:12:44.515 --rc genhtml_legend=1 00:12:44.515 --rc geninfo_all_blocks=1 00:12:44.515 --rc geninfo_unexecuted_blocks=1 00:12:44.515 00:12:44.515 ' 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:44.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.515 --rc genhtml_branch_coverage=1 00:12:44.515 --rc genhtml_function_coverage=1 00:12:44.515 --rc genhtml_legend=1 00:12:44.515 --rc geninfo_all_blocks=1 00:12:44.515 --rc geninfo_unexecuted_blocks=1 00:12:44.515 00:12:44.515 ' 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:44.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.515 --rc genhtml_branch_coverage=1 00:12:44.515 --rc genhtml_function_coverage=1 00:12:44.515 --rc genhtml_legend=1 00:12:44.515 --rc geninfo_all_blocks=1 00:12:44.515 --rc geninfo_unexecuted_blocks=1 00:12:44.515 00:12:44.515 ' 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:44.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
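The referrals test reuses the host identity generated above by nvme gen-hostnqn: every later nvme discover call in this run passes the same --hostnqn/--hostid pair, and the host ID is simply the UUID portion of the generated NQN. A minimal sketch of that pattern, assuming a root shell with nvme-cli installed and a target already listening on 10.0.0.2:8009 (the HOSTID derivation is an illustration matching the values logged above, not a line from the test itself):

    HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
    HOSTID=${HOSTNQN##*uuid:}          # the UUID part doubles as the host ID in this run
    nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json   # JSON output is what the test pipes into jq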
00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:44.515 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:44.516 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:44.516 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:44.516 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.516 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:44.516 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:44.516 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:44.516 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.516 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.516 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.516 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:44.516 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:44.516 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:44.516 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:52.652 09:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:52.652 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:52.652 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:52.652 
09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:52.652 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:52.652 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:52.652 09:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.652 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:52.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:52.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:12:52.653 00:12:52.653 --- 10.0.0.2 ping statistics --- 00:12:52.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.653 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:52.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:12:52.653 00:12:52.653 --- 10.0.0.1 ping statistics --- 00:12:52.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.653 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1269738 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1269738 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1269738 ']' 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
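The two pings above complete the nvmf_tcp_init phase: the target-side port (cvl_0_0) is moved into a private network namespace, addresses are assigned on both sides, and connectivity is verified in each direction before nvmf_tgt is started inside that namespace. The equivalent manual setup, using the interface and namespace names from this run (they are specific to this test bed), looks roughly like:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator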
00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.653 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.653 [2024-11-20 09:46:22.766693] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:12:52.653 [2024-11-20 09:46:22.766760] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.653 [2024-11-20 09:46:22.865579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.653 [2024-11-20 09:46:22.918845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.653 [2024-11-20 09:46:22.918904] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.653 [2024-11-20 09:46:22.918913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.653 [2024-11-20 09:46:22.918921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.653 [2024-11-20 09:46:22.918927] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.653 [2024-11-20 09:46:22.920999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.653 [2024-11-20 09:46:22.921181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.653 [2024-11-20 09:46:22.921321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.653 [2024-11-20 09:46:22.921328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.914 [2024-11-20 09:46:23.649636] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:12:52.914 [2024-11-20 09:46:23.665947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:52.914 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:53.175 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:53.175 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:53.175 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:53.175 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.175 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:53.175 09:46:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:53.175 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:53.436 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:53.436 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:53.436 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:53.437 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:53.697 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:53.697 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:53.697 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:53.697 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:53.697 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:53.697 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:53.697 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:53.697 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:53.697 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:53.957 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:53.957 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:53.957 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:53.957 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:53.957 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:53.957 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.218 09:46:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:54.218 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:54.218 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:54.218 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:54.478 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:54.478 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:54.478 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:54.478 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.478 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:54.478 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:54.478 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:54.478 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:54.478 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:54.478 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.478 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:54.738 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:54.738 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:54.738 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.738 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.738 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.738 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:54.738 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:54.738 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.738 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.738 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.738 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:54.738 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:54.738 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:54.738 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:54.738 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:54.738 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.738 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
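The referral round-trip traced above boils down to a handful of RPC and nvme-cli calls. A minimal standalone sketch, assuming a running nvmf_tgt on its default RPC socket and SPDK's stock scripts/rpc.py (path hypothetical), with the --hostnqn/--hostid flags from the trace omitted for brevity:

  RPC=./scripts/rpc.py   # hypothetical path inside an SPDK checkout

  # Register referrals on the target: one against the discovery subsystem,
  # one against a concrete NVMe subsystem, as in the trace above.
  $RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
  $RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

  # Target-side view: list referral transport addresses, sorted so the
  # comparison is order-independent.
  $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

  # Host-side view: the discovery log page should carry the same entries
  # once the current discovery subsystem itself is filtered out.
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
    | sort

  # Tear the referrals back down; get_referrals should then return [].
  $RPC nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery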
00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:54.998 rmmod nvme_tcp 00:12:54.998 rmmod nvme_fabrics 00:12:54.998 rmmod nvme_keyring 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1269738 ']' 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1269738 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1269738 ']' 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1269738 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1269738 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1269738' 00:12:54.998 killing process with pid 1269738 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1269738 00:12:54.998 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1269738 00:12:55.259 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:55.259 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:55.259 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:55.259 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:55.259 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:55.259 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:55.259 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:55.259 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:55.259 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:55.259 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.259 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.259 09:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:57.802 00:12:57.802 real 0m13.192s 00:12:57.802 user 0m15.474s 00:12:57.802 sys 0m6.605s 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.802 ************************************ 00:12:57.802 END TEST nvmf_referrals 00:12:57.802 ************************************ 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:57.802 ************************************ 00:12:57.802 START TEST nvmf_connect_disconnect 00:12:57.802 ************************************ 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:57.802 * Looking for test storage... 00:12:57.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:57.802 09:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:57.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.802 --rc genhtml_branch_coverage=1 00:12:57.802 --rc genhtml_function_coverage=1 00:12:57.802 --rc genhtml_legend=1 00:12:57.802 --rc geninfo_all_blocks=1 00:12:57.802 --rc geninfo_unexecuted_blocks=1 00:12:57.802 00:12:57.802 ' 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:57.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.802 --rc genhtml_branch_coverage=1 00:12:57.802 --rc genhtml_function_coverage=1 00:12:57.802 --rc genhtml_legend=1 00:12:57.802 --rc geninfo_all_blocks=1 00:12:57.802 --rc geninfo_unexecuted_blocks=1 00:12:57.802 00:12:57.802 ' 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:57.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.802 --rc genhtml_branch_coverage=1 00:12:57.802 --rc genhtml_function_coverage=1 00:12:57.802 --rc genhtml_legend=1 00:12:57.802 --rc geninfo_all_blocks=1 00:12:57.802 --rc geninfo_unexecuted_blocks=1 00:12:57.802 00:12:57.802 ' 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:57.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.802 --rc genhtml_branch_coverage=1 00:12:57.802 --rc genhtml_function_coverage=1 00:12:57.802 --rc genhtml_legend=1 00:12:57.802 --rc geninfo_all_blocks=1 00:12:57.802 --rc geninfo_unexecuted_blocks=1 00:12:57.802 00:12:57.802 ' 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:57.802 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.803 09:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:57.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:57.803 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:06.084 
09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:06.084 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:06.084 
09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:06.084 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:06.084 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
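The device scan traced above (gather_supported_nvmf_pci_devs) walks a table of supported PCI IDs and resolves each matching NIC to its kernel net device through sysfs. A condensed sketch of that step, hardcoded to the Intel E810 ID (0x159b) found in this run and using lspci in place of the script's internal PCI cache:

  # For every Intel E810 port (vendor 0x8086, device 0x159b), list the net
  # device(s) the kernel bound to it and report their link state.
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
      for net in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$net" ] || continue
          echo "Found net device under $pci: ${net##*/} ($(cat "$net"/operstate))"
      done
  done

On this machine that loop would report the same two ports the log finds, cvl_0_0 under 0000:4b:00.0 and cvl_0_1 under 0000:4b:00.1.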
00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:06.084 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:06.084 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:06.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:06.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:13:06.084 00:13:06.084 --- 10.0.0.2 ping statistics --- 00:13:06.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.085 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:06.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:13:06.085 00:13:06.085 --- 10.0.0.1 ping statistics --- 00:13:06.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.085 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1274813 00:13:06.085 09:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1274813 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1274813 ']' 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.085 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.085 [2024-11-20 09:46:36.036247] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:13:06.085 [2024-11-20 09:46:36.036311] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.085 [2024-11-20 09:46:36.137467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.085 [2024-11-20 09:46:36.190716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.085 [2024-11-20 09:46:36.190771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.085 [2024-11-20 09:46:36.190780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.085 [2024-11-20 09:46:36.190787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.085 [2024-11-20 09:46:36.190794] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
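nvmftestinit and nvmfappstart wire the two detected ports into a point-to-point test rig: one port moves into a private network namespace for the target while the other stays in the root namespace as the initiator, and the target application is launched inside that namespace. A sketch of the same sequence, with the commands lifted from the trace and the nvmf_tgt path hedged behind a variable:

  SPDK=/path/to/spdk   # hypothetical checkout location

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Sanity-check both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # Launch the target inside the namespace, as nvmfappstart does above.
  ip netns exec cvl_0_0_ns_spdk "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &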
00:13:06.085 [2024-11-20 09:46:36.192887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.085 [2024-11-20 09:46:36.193048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.085 [2024-11-20 09:46:36.193224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.085 [2024-11-20 09:46:36.193224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.085 [2024-11-20 09:46:36.917903] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.085 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.347 09:46:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.347 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.347 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.347 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.347 [2024-11-20 09:46:37.003339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.347 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.347 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:06.347 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:06.347 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:10.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:24.636 rmmod nvme_tcp 00:13:24.636 rmmod nvme_fabrics 00:13:24.636 rmmod nvme_keyring 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1274813 ']' 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1274813 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1274813 ']' 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1274813 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
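Each of the five "disconnected 1 controller(s)" lines above is one pass of connect_disconnect.sh's loop against the freshly provisioned subsystem. A sketch of the whole exchange, with RPC arguments taken from the trace, rpc.py assumed to reach the target's socket, and the host NQN/ID flags from common.sh omitted:

  # Provision: a 64 MiB malloc bdev with 512-byte blocks, exposed through
  # cnode1 on the TCP listener the initiator-side namespace can reach.
  rpc.py bdev_malloc_create 64 512                  # returns bdev name Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Five connect/disconnect round-trips; nvme-cli itself prints the
  # "NQN:... disconnected 1 controller(s)" lines captured in the log.
  for i in $(seq 1 5); do
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done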
00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1274813 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1274813' 00:13:24.636 killing process with pid 1274813 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1274813 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1274813 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.636 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:27.182 00:13:27.182 real 0m29.423s 00:13:27.182 user 1m19.231s 00:13:27.182 sys 0m7.149s 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:27.182 ************************************ 00:13:27.182 END TEST nvmf_connect_disconnect 00:13:27.182 ************************************ 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.182 09:46:57 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:27.182 ************************************ 00:13:27.182 START TEST nvmf_multitarget 00:13:27.182 ************************************ 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:27.182 * Looking for test storage... 00:13:27.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:27.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.182 --rc genhtml_branch_coverage=1 00:13:27.182 --rc genhtml_function_coverage=1 00:13:27.182 --rc genhtml_legend=1 00:13:27.182 --rc geninfo_all_blocks=1 00:13:27.182 --rc geninfo_unexecuted_blocks=1 00:13:27.182 00:13:27.182 ' 00:13:27.182 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:27.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.183 --rc genhtml_branch_coverage=1 00:13:27.183 --rc genhtml_function_coverage=1 00:13:27.183 --rc genhtml_legend=1 00:13:27.183 --rc geninfo_all_blocks=1 00:13:27.183 --rc geninfo_unexecuted_blocks=1 00:13:27.183 00:13:27.183 ' 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:27.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.183 --rc genhtml_branch_coverage=1 00:13:27.183 --rc genhtml_function_coverage=1 00:13:27.183 --rc genhtml_legend=1 00:13:27.183 --rc geninfo_all_blocks=1 00:13:27.183 --rc geninfo_unexecuted_blocks=1 00:13:27.183 00:13:27.183 ' 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:27.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.183 --rc genhtml_branch_coverage=1 00:13:27.183 --rc genhtml_function_coverage=1 00:13:27.183 --rc genhtml_legend=1 00:13:27.183 --rc geninfo_all_blocks=1 00:13:27.183 --rc geninfo_unexecuted_blocks=1 00:13:27.183 00:13:27.183 ' 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.183 09:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:27.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:27.183 09:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:27.183 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:35.325 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:35.325 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:35.325 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:35.325 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:35.325 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:35.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:35.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:13:35.326 00:13:35.326 --- 10.0.0.2 ping statistics --- 00:13:35.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.326 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:35.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:35.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:13:35.326 00:13:35.326 --- 10.0.0.1 ping statistics --- 00:13:35.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.326 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1282903 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1282903 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1282903 ']' 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:35.326 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:35.326 [2024-11-20 09:47:05.522269] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
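
The nvmf_tcp_init sequence traced above builds the usual two-port SPDK test topology: one physical port (cvl_0_0) is moved into a network namespace and becomes the target side at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with TCP port 4420 opened and a ping run in each direction as a sanity check. A minimal standalone sketch of that setup, using the interface names, addresses, and iptables comment taken from the log (ordering and variable names here are illustrative, not the verbatim nvmf/common.sh code):

    TARGET_IF=cvl_0_0            # port that moves into the target namespace
    INITIATOR_IF=cvl_0_1         # peer port left in the root namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP listener port, tagged so teardown can strip it again
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator

The SPDK_NVMF comment is what the later nvmftestfini teardown greps out of iptables-save before restoring the ruleset.
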
00:13:35.326 [2024-11-20 09:47:05.522334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.326 [2024-11-20 09:47:05.622061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:35.326 [2024-11-20 09:47:05.674550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.326 [2024-11-20 09:47:05.674606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.326 [2024-11-20 09:47:05.674614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.326 [2024-11-20 09:47:05.674622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:35.326 [2024-11-20 09:47:05.674628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:35.326 [2024-11-20 09:47:05.676685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.326 [2024-11-20 09:47:05.676845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.326 [2024-11-20 09:47:05.677007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.326 [2024-11-20 09:47:05.677008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.587 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.587 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:35.587 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:35.587 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:35.587 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:35.587 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.587 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:35.587 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:35.587 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:35.848 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:35.848 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:35.848 "nvmf_tgt_1" 00:13:35.848 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:35.848 "nvmf_tgt_2" 00:13:35.848 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
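
The nvmf_multitarget body that follows is easier to read as the RPC sequence it drives through multitarget_rpc.py: count the default target, create two named targets, recount, delete them, and recount again. A condensed sketch with the flags copied verbatim from the trace (path shortened; the bracketed jq-length tests mirror the '[' N '!=' N ']' checks below):

    rpc=./spdk/test/nvmf/target/multitarget_rpc.py

    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default plus the two new targets
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target
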
00:13:35.848 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:36.109 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:36.109 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:36.109 true 00:13:36.109 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:36.371 true 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:36.371 rmmod nvme_tcp 00:13:36.371 rmmod nvme_fabrics 00:13:36.371 rmmod nvme_keyring 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1282903 ']' 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1282903 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1282903 ']' 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1282903 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:36.371 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1282903 00:13:36.633 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:36.633 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:36.633 09:47:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1282903' 00:13:36.633 killing process with pid 1282903 00:13:36.633 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1282903 00:13:36.633 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1282903 00:13:36.633 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:36.633 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:36.633 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:36.633 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:36.633 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:36.633 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:36.633 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:36.633 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:36.633 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:36.633 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.633 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.633 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.178 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:39.178 00:13:39.178 real 0m11.897s 00:13:39.178 user 0m10.360s 00:13:39.178 sys 0m6.145s 00:13:39.178 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.178 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:39.178 ************************************ 00:13:39.178 END TEST nvmf_multitarget 00:13:39.178 ************************************ 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:39.179 ************************************ 00:13:39.179 START TEST nvmf_rpc 00:13:39.179 ************************************ 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:39.179 * Looking for test storage... 
00:13:39.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:39.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.179 --rc genhtml_branch_coverage=1 00:13:39.179 --rc genhtml_function_coverage=1 00:13:39.179 --rc genhtml_legend=1 00:13:39.179 --rc geninfo_all_blocks=1 00:13:39.179 --rc geninfo_unexecuted_blocks=1 00:13:39.179 00:13:39.179 ' 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:39.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.179 --rc genhtml_branch_coverage=1 00:13:39.179 --rc genhtml_function_coverage=1 00:13:39.179 --rc genhtml_legend=1 00:13:39.179 --rc geninfo_all_blocks=1 00:13:39.179 --rc geninfo_unexecuted_blocks=1 00:13:39.179 00:13:39.179 ' 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:39.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.179 --rc genhtml_branch_coverage=1 00:13:39.179 --rc genhtml_function_coverage=1 00:13:39.179 --rc genhtml_legend=1 00:13:39.179 --rc geninfo_all_blocks=1 00:13:39.179 --rc geninfo_unexecuted_blocks=1 00:13:39.179 00:13:39.179 ' 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:39.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.179 --rc genhtml_branch_coverage=1 00:13:39.179 --rc genhtml_function_coverage=1 00:13:39.179 --rc genhtml_legend=1 00:13:39.179 --rc geninfo_all_blocks=1 00:13:39.179 --rc geninfo_unexecuted_blocks=1 00:13:39.179 00:13:39.179 ' 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
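
The lt/cmp_versions trace above (scripts/common.sh@333-368) is the shell helper autotest uses to decide whether the installed lcov predates 2.x and therefore needs the extra branch/function-coverage flags. A rough reconstruction inferred from the xtrace, not the verbatim function bodies (the decimal fallback for non-numeric components is an assumption):

    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0   # assumed: non-numeric parts compare as 0
    }

    lt() {   # succeeds when version $1 sorts before version $2
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( $(decimal "${ver1[v]:-0}") > $(decimal "${ver2[v]:-0}") )) && return 1
            (( $(decimal "${ver1[v]:-0}") < $(decimal "${ver2[v]:-0}") )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "lcov older than 2: add the --rc lcov_*_coverage=1 options"

Here lt 1.15 2 compares 1 against 2 in the first component and returns success, which is why the trace goes on to export the --rc lcov_branch_coverage / lcov_function_coverage options.
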
00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.179 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:39.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:39.180 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:39.180 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:39.180 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:39.180 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:39.180 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:39.180 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:39.180 09:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.180 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:39.180 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:39.180 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:39.180 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.180 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.180 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.180 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:39.180 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:39.180 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:39.180 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:47.324 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:47.324 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:47.324 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:47.324 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:47.324 09:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:47.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:13:47.324 00:13:47.324 --- 10.0.0.2 ping statistics --- 00:13:47.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.324 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:13:47.324 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:47.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:13:47.324 00:13:47.325 --- 10.0.0.1 ping statistics --- 00:13:47.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.325 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1287372 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1287372 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1287372 ']' 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.325 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.325 [2024-11-20 09:47:17.502782] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
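The namespace setup just traced is the core of the phy TCP topology: the target port cvl_0_0 is moved into a private network namespace and given 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables rule opens TCP/4420 on the initiator side, and a ping in each direction proves L3 reachability before anything NVMe-related starts. Collected in one place (interface and namespace names are the ones from this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # root netns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target netns -> initiator

nvmf_tgt is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so the SPDK target listens on cvl_0_0 while the kernel initiator keeps using cvl_0_1 from the root namespace.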
00:13:47.325 [2024-11-20 09:47:17.502847] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.325 [2024-11-20 09:47:17.603543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:47.325 [2024-11-20 09:47:17.657521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.325 [2024-11-20 09:47:17.657575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.325 [2024-11-20 09:47:17.657584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.325 [2024-11-20 09:47:17.657591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.325 [2024-11-20 09:47:17.657598] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.325 [2024-11-20 09:47:17.660059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.325 [2024-11-20 09:47:17.660222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.325 [2024-11-20 09:47:17.660334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.325 [2024-11-20 09:47:17.660336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:47.587 "tick_rate": 2400000000, 00:13:47.587 "poll_groups": [ 00:13:47.587 { 00:13:47.587 "name": "nvmf_tgt_poll_group_000", 00:13:47.587 "admin_qpairs": 0, 00:13:47.587 "io_qpairs": 0, 00:13:47.587 "current_admin_qpairs": 0, 00:13:47.587 "current_io_qpairs": 0, 00:13:47.587 "pending_bdev_io": 0, 00:13:47.587 "completed_nvme_io": 0, 00:13:47.587 "transports": [] 00:13:47.587 }, 00:13:47.587 { 00:13:47.587 "name": "nvmf_tgt_poll_group_001", 00:13:47.587 "admin_qpairs": 0, 00:13:47.587 "io_qpairs": 0, 00:13:47.587 "current_admin_qpairs": 0, 00:13:47.587 "current_io_qpairs": 0, 00:13:47.587 "pending_bdev_io": 0, 00:13:47.587 "completed_nvme_io": 0, 00:13:47.587 "transports": [] 00:13:47.587 }, 00:13:47.587 { 00:13:47.587 "name": "nvmf_tgt_poll_group_002", 00:13:47.587 "admin_qpairs": 0, 00:13:47.587 "io_qpairs": 0, 00:13:47.587 
"current_admin_qpairs": 0, 00:13:47.587 "current_io_qpairs": 0, 00:13:47.587 "pending_bdev_io": 0, 00:13:47.587 "completed_nvme_io": 0, 00:13:47.587 "transports": [] 00:13:47.587 }, 00:13:47.587 { 00:13:47.587 "name": "nvmf_tgt_poll_group_003", 00:13:47.587 "admin_qpairs": 0, 00:13:47.587 "io_qpairs": 0, 00:13:47.587 "current_admin_qpairs": 0, 00:13:47.587 "current_io_qpairs": 0, 00:13:47.587 "pending_bdev_io": 0, 00:13:47.587 "completed_nvme_io": 0, 00:13:47.587 "transports": [] 00:13:47.587 } 00:13:47.587 ] 00:13:47.587 }' 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.587 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.849 [2024-11-20 09:47:18.501531] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:47.849 "tick_rate": 2400000000, 00:13:47.849 "poll_groups": [ 00:13:47.849 { 00:13:47.849 "name": "nvmf_tgt_poll_group_000", 00:13:47.849 "admin_qpairs": 0, 00:13:47.849 "io_qpairs": 0, 00:13:47.849 "current_admin_qpairs": 0, 00:13:47.849 "current_io_qpairs": 0, 00:13:47.849 "pending_bdev_io": 0, 00:13:47.849 "completed_nvme_io": 0, 00:13:47.849 "transports": [ 00:13:47.849 { 00:13:47.849 "trtype": "TCP" 00:13:47.849 } 00:13:47.849 ] 00:13:47.849 }, 00:13:47.849 { 00:13:47.849 "name": "nvmf_tgt_poll_group_001", 00:13:47.849 "admin_qpairs": 0, 00:13:47.849 "io_qpairs": 0, 00:13:47.849 "current_admin_qpairs": 0, 00:13:47.849 "current_io_qpairs": 0, 00:13:47.849 "pending_bdev_io": 0, 00:13:47.849 "completed_nvme_io": 0, 00:13:47.849 "transports": [ 00:13:47.849 { 00:13:47.849 "trtype": "TCP" 00:13:47.849 } 00:13:47.849 ] 00:13:47.849 }, 00:13:47.849 { 00:13:47.849 "name": "nvmf_tgt_poll_group_002", 00:13:47.849 "admin_qpairs": 0, 00:13:47.849 "io_qpairs": 0, 00:13:47.849 "current_admin_qpairs": 0, 00:13:47.849 "current_io_qpairs": 0, 00:13:47.849 "pending_bdev_io": 0, 00:13:47.849 "completed_nvme_io": 0, 00:13:47.849 "transports": [ 00:13:47.849 { 00:13:47.849 "trtype": "TCP" 
00:13:47.849 } 00:13:47.849 ] 00:13:47.849 }, 00:13:47.849 { 00:13:47.849 "name": "nvmf_tgt_poll_group_003", 00:13:47.849 "admin_qpairs": 0, 00:13:47.849 "io_qpairs": 0, 00:13:47.849 "current_admin_qpairs": 0, 00:13:47.849 "current_io_qpairs": 0, 00:13:47.849 "pending_bdev_io": 0, 00:13:47.849 "completed_nvme_io": 0, 00:13:47.849 "transports": [ 00:13:47.849 { 00:13:47.849 "trtype": "TCP" 00:13:47.849 } 00:13:47.849 ] 00:13:47.849 } 00:13:47.849 ] 00:13:47.849 }' 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.849 Malloc1 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
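The stats checks above use two small helpers from target/rpc.sh, jcount and jsum: both run a jq filter over the nvmf_get_stats JSON, one counting matches with wc -l and the other summing them with awk. A sketch of what they amount to (the exact in-tree plumbing may differ; here the JSON is captured into $stats first, and rpc_cmd is the in-tree RPC wrapper):

  stats=$(rpc_cmd nvmf_get_stats)
  jcount() { jq "$1" <<< "$stats" | wc -l; }
  jsum()   { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }
  (( $(jcount '.poll_groups[].name') == 4 ))       # one poll group per core of -m 0xF
  (( $(jsum '.poll_groups[].io_qpairs') == 0 ))    # nothing connected yet

This is also why the first nvmf_get_stats shows "transports": [] in every poll group while the second, taken after nvmf_create_transport -t tcp -o -u 8192, shows a TCP entry in each one.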
common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.849 [2024-11-20 09:47:18.709327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:47.849 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:47.850 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:47.850 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:47.850 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.850 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:47.850 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.850 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:47.850 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.850 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:47.850 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:47.850 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:47.850 [2024-11-20 09:47:18.746293] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:48.111 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:48.111 could not add new controller: failed to write to nvme-fabrics device 00:13:48.111 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:48.111 09:47:18 
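This connect attempt is supposed to fail: the subsystem was created with allow_any_host disabled and no hosts added, so the target rejects the host NQN and nvme connect returns an I/O error. The NOT wrapper from autotest_common.sh inverts the exit status so that an expected rejection counts as a pass. A simplified illustration of the pattern (the real helper also resolves the executable and inspects the exit code, as the es checks in the trace show):

  NOT() {
    if "$@"; then return 1; fi    # unexpected success -> test failure
    return 0                      # failure was required -> pass
  }
  # $host_nqn stands in for the UUID-based host NQN used in this run
  NOT nvme connect --hostnqn="$host_nqn" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420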
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:48.111 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:48.111 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:48.111 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:48.111 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.111 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.111 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.111 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:49.495 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:49.495 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:49.495 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:49.495 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:49.495 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:52.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:52.036 [2024-11-20 09:47:22.521388] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:52.036 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:52.036 could not add new controller: failed to write to nvme-fabrics device 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.036 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.036 
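Taken together, rpc.sh@52..73 exercise the full per-subsystem host ACL lifecycle: deny by default, admit one NQN, revoke it, then open the subsystem to everyone. In standalone scripts/rpc.py terms, with the RPC names exactly as they appear in the trace ($host_nqn again being the host NQN placeholder):

  scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1      # enforce the ACL
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$host_nqn"   # connect now succeeds
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$host_nqn" # back to rejected
  scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1      # any host may connect

Each transition is verified from the initiator side: a connect that must fail is wrapped in NOT, and a connect that must succeed is followed by waitforserial.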
09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.037 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:53.420 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:53.420 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:53.420 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:53.420 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:53.420 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:55.331 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:55.331 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:55.331 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:55.331 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:55.331 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:55.331 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:55.331 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.331 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:55.331 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:55.331 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:55.331 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.331 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:55.331 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:55.593 
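waitforserial and waitforserial_disconnect, traced above, are the synchronization points between nvme connect/disconnect and the block layer: they poll lsblk for the subsystem serial (SPDKISFASTANDAWESOME) until the expected number of devices appears or disappears. A simplified sketch of the wait-for-appear side, matching the 15-iteration, 2-second cadence in the trace:

  waitforserial() {
    local serial=$1 expected=${2:-1} i=0
    while (( i++ <= 15 )); do
      sleep 2
      (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == expected )) && return 0
    done
    return 1
  }
  waitforserial SPDKISFASTANDAWESOME   # block until the namespace is a visible block device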
09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.593 [2024-11-20 09:47:26.287147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.593 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:56.979 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:56.979 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:56.979 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:56.979 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:56.979 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:59.526 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:59.526 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:59.526 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:59.526 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:59.526 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:59.526 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:59.526 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:59.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.526 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:59.526 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:59.526 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:59.526 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.526 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.526 [2024-11-20 09:47:30.060710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.526 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:00.912 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:00.913 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:00.913 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:00.913 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:00.913 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:02.824 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:02.824 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:02.824 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:02.824 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:02.824 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:02.824 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:02.824 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:03.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.085 [2024-11-20 09:47:33.821161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.085 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:04.997 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:04.997 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:04.997 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.997 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:04.997 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:06.912 
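The pattern now repeating is one iteration of the rpc.sh@81 loop, run five times end to end: build a subsystem, attach storage and a listener, connect from the kernel initiator, then tear everything down again. As a single consolidated sketch (rpc_cmd is the in-tree RPC wrapper; $host_nqn stands for the UUID-based host NQN of this run):

  for i in $(seq 1 5); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # fixed nsid 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect --hostnqn="$host_nqn" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done

Cycling the whole subsystem rather than just the connection checks that listener, namespace, and subsystem teardown leave no state behind that would break the next iteration.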
09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:06.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.912 [2024-11-20 09:47:37.581657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.912 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:08.296 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:08.296 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:08.296 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:08.296 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:08.296 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:10.346 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:10.346 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:10.346 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:10.346 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:10.346 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:10.346 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:10.346 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:10.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.346 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:10.346 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:10.346 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:10.346 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:14:10.346 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:10.346 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.607 [2024-11-20 09:47:41.303686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.607 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:11.992 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:11.992 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:11.992 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:11.992 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:11.992 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:13.906 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:14.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.166 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:14.166 
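
Each pass of the rpc.sh@81 loop above exercises one full subsystem lifecycle over the rpc.py CLI. Condensed into plain calls, with the loop count written out for illustration (the rpc.py path, NQN, Malloc1 bdev, and namespace ID are taken from the trace; I/O between connect and disconnect is elided):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5     # attach bdev as NSID 5
        $rpc nvmf_subsystem_allow_any_host "$nqn"
        nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
        nvme disconnect -n "$nqn"
        $rpc nvmf_subsystem_remove_ns "$nqn" 5
        $rpc nvmf_delete_subsystem "$nqn"
    done

The rpc.sh@99 loop that starts here repeats the same lifecycle without the connect/disconnect step, churning create/delete as fast as the target accepts them.
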
09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.166 [2024-11-20 09:47:45.022340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.166 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.427 [2024-11-20 09:47:45.090500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.427 
09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.427 [2024-11-20 09:47:45.158679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.427 [2024-11-20 09:47:45.230899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.427 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.428 [2024-11-20 09:47:45.295113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.428 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:14.689 "tick_rate": 2400000000, 00:14:14.689 "poll_groups": [ 00:14:14.689 { 00:14:14.689 "name": "nvmf_tgt_poll_group_000", 00:14:14.689 "admin_qpairs": 0, 00:14:14.689 "io_qpairs": 224, 00:14:14.689 "current_admin_qpairs": 0, 00:14:14.689 "current_io_qpairs": 0, 00:14:14.689 "pending_bdev_io": 0, 00:14:14.689 "completed_nvme_io": 274, 00:14:14.689 "transports": [ 00:14:14.689 { 00:14:14.689 "trtype": "TCP" 00:14:14.689 } 00:14:14.689 ] 00:14:14.689 }, 00:14:14.689 { 00:14:14.689 "name": "nvmf_tgt_poll_group_001", 00:14:14.689 "admin_qpairs": 1, 00:14:14.689 "io_qpairs": 223, 00:14:14.689 "current_admin_qpairs": 0, 00:14:14.689 "current_io_qpairs": 0, 00:14:14.689 "pending_bdev_io": 0, 00:14:14.689 "completed_nvme_io": 422, 00:14:14.689 "transports": [ 00:14:14.689 { 00:14:14.689 "trtype": "TCP" 00:14:14.689 } 00:14:14.689 ] 00:14:14.689 }, 00:14:14.689 { 00:14:14.689 "name": "nvmf_tgt_poll_group_002", 00:14:14.689 "admin_qpairs": 6, 00:14:14.689 "io_qpairs": 218, 00:14:14.689 "current_admin_qpairs": 0, 00:14:14.689 "current_io_qpairs": 0, 00:14:14.689 "pending_bdev_io": 0, 00:14:14.689 "completed_nvme_io": 318, 00:14:14.689 "transports": [ 00:14:14.689 { 00:14:14.689 "trtype": "TCP" 00:14:14.689 } 00:14:14.689 ] 00:14:14.689 }, 00:14:14.689 { 00:14:14.689 "name": "nvmf_tgt_poll_group_003", 00:14:14.689 "admin_qpairs": 0, 00:14:14.689 "io_qpairs": 224, 00:14:14.689 "current_admin_qpairs": 0, 00:14:14.689 "current_io_qpairs": 0, 00:14:14.689 "pending_bdev_io": 0, 00:14:14.689 "completed_nvme_io": 225, 00:14:14.689 "transports": [ 00:14:14.689 { 00:14:14.689 "trtype": "TCP" 00:14:14.689 } 00:14:14.689 ] 00:14:14.689 } 00:14:14.689 ] 00:14:14.689 }' 00:14:14.689 09:47:45 
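
The jsum helper invoked just below reduces that nvmf_get_stats JSON to a single number per field: jq extracts one value per poll group and awk accumulates them. A standalone sketch consistent with the traced jq/awk pipeline (reading the JSON from stdin is an assumption about the plumbing):

    # Sum a numeric field across array elements, e.g. per-poll-group qpair counts.
    jsum() {
        local filter=$1
        jq "$filter" | awk '{s+=$1} END {print s}'
    }

    # Against the stats above: admin_qpairs 0+1+6+0 = 7 and
    # io_qpairs 224+223+218+224 = 889, matching the (( 7 > 0 )) and
    # (( 889 > 0 )) assertions in the trace.
    # echo "$stats" | jsum '.poll_groups[].admin_qpairs'
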
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:14.689 rmmod nvme_tcp 00:14:14.689 rmmod nvme_fabrics 00:14:14.689 rmmod nvme_keyring 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1287372 ']' 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1287372 00:14:14.689 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1287372 ']' 00:14:14.690 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1287372 00:14:14.690 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:14:14.690 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:14.690 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1287372 00:14:14.690 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:14.690 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:14.690 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1287372' 00:14:14.690 killing process with pid 1287372 00:14:14.690 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1287372 00:14:14.690 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1287372 00:14:14.950 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:14.950 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:14.950 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:14.950 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:14.950 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:14.950 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:14.950 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:14.950 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:14.950 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:14.950 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.950 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:14.950 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.497 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:17.497 00:14:17.497 real 0m38.123s 00:14:17.497 user 1m54.097s 00:14:17.497 sys 0m8.002s 00:14:17.497 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:17.497 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.497 ************************************ 00:14:17.497 END TEST nvmf_rpc 00:14:17.497 ************************************ 00:14:17.497 09:47:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:17.497 09:47:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:17.497 09:47:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:17.497 09:47:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:17.497 ************************************ 00:14:17.497 START TEST nvmf_invalid 00:14:17.497 ************************************ 00:14:17.497 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:17.497 * Looking for test storage... 
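
Before nvmf_invalid starts, the nvmf_rpc run above was torn down by nvmftestfini: unload the kernel NVMe/TCP initiator modules, kill the nvmf_tgt reactor process, strip the SPDK-tagged firewall rules, and flush the test interface. Roughly (process and interface names from the log; error handling omitted):

    modprobe -r nvme-tcp nvme-fabrics                       # detach kernel initiator
    kill "$nvmfpid" && wait "$nvmfpid"                      # stop the target process
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only SPDK's rules
    ip -4 addr flush cvl_0_1                                # clear initiator address
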
00:14:17.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:17.497 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:17.497 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:14:17.497 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:17.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.497 --rc genhtml_branch_coverage=1 00:14:17.497 --rc genhtml_function_coverage=1 00:14:17.497 --rc genhtml_legend=1 00:14:17.497 --rc geninfo_all_blocks=1 00:14:17.497 --rc geninfo_unexecuted_blocks=1 00:14:17.497 00:14:17.497 ' 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:17.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.497 --rc genhtml_branch_coverage=1 00:14:17.497 --rc genhtml_function_coverage=1 00:14:17.497 --rc genhtml_legend=1 00:14:17.497 --rc geninfo_all_blocks=1 00:14:17.497 --rc geninfo_unexecuted_blocks=1 00:14:17.497 00:14:17.497 ' 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:17.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.497 --rc genhtml_branch_coverage=1 00:14:17.497 --rc genhtml_function_coverage=1 00:14:17.497 --rc genhtml_legend=1 00:14:17.497 --rc geninfo_all_blocks=1 00:14:17.497 --rc geninfo_unexecuted_blocks=1 00:14:17.497 00:14:17.497 ' 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:17.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.497 --rc genhtml_branch_coverage=1 00:14:17.497 --rc genhtml_function_coverage=1 00:14:17.497 --rc genhtml_legend=1 00:14:17.497 --rc geninfo_all_blocks=1 00:14:17.497 --rc geninfo_unexecuted_blocks=1 00:14:17.497 00:14:17.497 ' 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:17.497 09:47:48 
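
The lt/cmp_versions trace above (checking the lcov version against 2) does a component-wise numeric compare: split each version string on '.', '-' or ':', then walk the fields left to right until one side wins. The same idea as one hypothetical function, with missing fields padded to 0 (the padding and the merged function are assumptions; the harness keeps two arrays and separate length variables):

    # Succeed if version $1 sorts strictly before version $2, e.g. 1.15 < 2.
    version_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0    # earliest field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # all fields equal
    }

    # version_lt 1.15 2 && echo "lcov predates 2: use the legacy --rc options"
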
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.497 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:17.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:17.498 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:25.646 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:25.646 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:25.646 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:25.646 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:25.646 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:25.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:25.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:14:25.647 00:14:25.647 --- 10.0.0.2 ping statistics --- 00:14:25.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.647 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:25.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:25.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:14:25.647 00:14:25.647 --- 10.0.0.1 ping statistics --- 00:14:25.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.647 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1297173 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1297173 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1297173 ']' 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:25.647 [2024-11-20 09:47:55.674137] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
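
On this phy setup the two ice-driver ports (0000:4b:00.0 and 0000:4b:00.1) play target and initiator, with the target port isolated in its own network namespace so both stacks can share one host. The nvmf_tcp_init sequence traced above amounts to (interface and namespace names from the log):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, host netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                   # host -> namespaced target

nvmf_tgt itself then runs inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why the host-side nvme connect reaches 10.0.0.2 over a real wire.
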
00:14:25.647 [2024-11-20 09:47:55.674218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:25.647 [2024-11-20 09:47:55.749302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:14:25.647 [2024-11-20 09:47:55.797653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:25.647 [2024-11-20 09:47:55.797707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:25.647 [2024-11-20 09:47:55.797714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:25.647 [2024-11-20 09:47:55.797720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:25.647 [2024-11-20 09:47:55.797725] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:25.647 [2024-11-20 09:47:55.801192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:25.647 [2024-11-20 09:47:55.801267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:14:25.647 [2024-11-20 09:47:55.801428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:14:25.647 [2024-11-20 09:47:55.801428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0
00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:14:25.647 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24082
00:14:25.647 [2024-11-20 09:47:56.127621] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:14:25.647 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:14:25.647 {
00:14:25.647 "nqn": "nqn.2016-06.io.spdk:cnode24082",
00:14:25.647 "tgt_name": "foobar",
00:14:25.647 "method": "nvmf_create_subsystem",
00:14:25.647 "req_id": 1
00:14:25.647 }
00:14:25.647 Got JSON-RPC error response
00:14:25.647 response:
00:14:25.647 {
00:14:25.647 "code": -32603,
00:14:25.647 "message": "Unable to find target foobar"
00:14:25.647 }'
00:14:25.647 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:14:25.647 {
00:14:25.647 "nqn": "nqn.2016-06.io.spdk:cnode24082",
00:14:25.647 "tgt_name": "foobar",
00:14:25.647 "method": "nvmf_create_subsystem",
00:14:25.647 "req_id": 1
00:14:25.647 }
00:14:25.647 Got JSON-RPC error response
00:14:25.647 response:
00:14:25.647 {
00:14:25.647 "code": -32603,
00:14:25.647 "message": "Unable to find target foobar"
00:14:25.647 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:14:25.647 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:14:25.647 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5770
00:14:25.647 [2024-11-20 09:47:56.336398] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5770: invalid serial number 'SPDKISFASTANDAWESOME'
00:14:25.647 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:14:25.647 {
00:14:25.647 "nqn": "nqn.2016-06.io.spdk:cnode5770",
00:14:25.647 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:14:25.647 "method": "nvmf_create_subsystem",
00:14:25.647 "req_id": 1
00:14:25.647 }
00:14:25.647 Got JSON-RPC error response
00:14:25.647 response:
00:14:25.647 {
00:14:25.647 "code": -32602,
00:14:25.647 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:14:25.647 }'
00:14:25.647 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:14:25.647 {
00:14:25.647 "nqn": "nqn.2016-06.io.spdk:cnode5770",
00:14:25.647 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:14:25.647 "method": "nvmf_create_subsystem",
00:14:25.647 "req_id": 1
00:14:25.647 }
00:14:25.647 Got JSON-RPC error response
00:14:25.647 response:
00:14:25.647 {
00:14:25.647 "code": -32602,
00:14:25.647 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:14:25.647 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:14:25.647 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:14:25.647 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode22491
00:14:25.647 [2024-11-20 09:47:56.549194] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22491: invalid model number 'SPDK_Controller'
00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:14:25.910 {
00:14:25.910 "nqn": "nqn.2016-06.io.spdk:cnode22491",
00:14:25.910 "model_number": "SPDK_Controller\u001f",
00:14:25.910 "method": "nvmf_create_subsystem",
00:14:25.910 "req_id": 1
00:14:25.910 }
00:14:25.910 Got JSON-RPC error response
00:14:25.910 response:
00:14:25.910 {
00:14:25.910 "code": -32602,
00:14:25.910 "message": "Invalid MN SPDK_Controller\u001f"
00:14:25.910 }'
00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:14:25.910 {
00:14:25.910 "nqn": "nqn.2016-06.io.spdk:cnode22491",
00:14:25.910 "model_number": "SPDK_Controller\u001f",
00:14:25.910 "method": "nvmf_create_subsystem",
00:14:25.910 "req_id": 1
00:14:25.910 }
00:14:25.910 Got JSON-RPC error response
00:14:25.910 response:
00:14:25.910 {
00:14:25.910 "code": -32602,
00:14:25.910 "message": "Invalid MN SPDK_Controller\u001f"
00:14:25.910 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:14:25.910 09:47:56
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.910 09:47:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:25.910 
09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:25.910 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.911 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.911 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:25.911 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:25.911 
09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:25.911 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.911 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.911 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:25.911 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:25.911 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:25.911 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.911 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.911 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:14:25.911 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:25.911 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:14:25.911 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:25.911 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:25.911 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ; == \- ]] 00:14:25.911 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ';]#[H3p~+oUJcgH1FylT' 00:14:25.911 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ';]#[H3p~+oUJcgH1FylT' nqn.2016-06.io.spdk:cnode31524 00:14:26.173 [2024-11-20 09:47:56.926586] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31524: invalid serial number ';]#[H3p~+oUJcgH1FylT' 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:26.173 { 00:14:26.173 "nqn": "nqn.2016-06.io.spdk:cnode31524", 00:14:26.173 "serial_number": ";]#[H\u007f3p~+oUJcgH1FylT", 00:14:26.173 "method": "nvmf_create_subsystem", 00:14:26.173 "req_id": 1 00:14:26.173 } 00:14:26.173 Got JSON-RPC error response 00:14:26.173 response: 00:14:26.173 { 00:14:26.173 "code": -32602, 00:14:26.173 "message": "Invalid SN ;]#[H\u007f3p~+oUJcgH1FylT" 00:14:26.173 }' 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:26.173 { 00:14:26.173 "nqn": "nqn.2016-06.io.spdk:cnode31524", 00:14:26.173 "serial_number": ";]#[H\u007f3p~+oUJcgH1FylT", 00:14:26.173 "method": "nvmf_create_subsystem", 00:14:26.173 "req_id": 1 00:14:26.173 } 00:14:26.173 Got JSON-RPC error response 00:14:26.173 response: 00:14:26.173 { 00:14:26.173 "code": -32602, 00:14:26.173 "message": "Invalid SN ;]#[H\u007f3p~+oUJcgH1FylT" 00:14:26.173 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' 
'74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:26.173 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:26.173 
09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 
00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.173 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
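The long run of printf %x / echo -e / string+= iterations above and below is target/invalid.sh's gen_random_s helper assembling a 41-character string (earlier, a 21-character one) from random ASCII codes 32 through 127; the finished string is then submitted as another deliberately invalid subsystem identifier. A condensed sketch of what the traced loop does, not the verbatim helper: printf -v stands in for echo -e so a generated space is not lost to command substitution, and the handling of a leading '-' is an assumption, since the trace only shows the check at invalid.sh@28:

    # sketch: build an N-character string from random ASCII codes 32..127
    gen_random_s() {
        local length=$1 string= ll code ch
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( 32 + RANDOM % 96 ))                   # same range as the chars array above
            printf -v ch '%b' "\\x$(printf '%x' "$code")"  # code point -> character
            string+=$ch
        done
        [[ ${string::1} == - ]] && string="_${string:1}"   # assumed guard: keep the string from starting with '-'
        echo "$string"
    }

Used as serial_number=$(gen_random_s 21), the result exercises nvmf_create_subsystem with every printable character plus DEL, which is why the JSON-RPC responses above show escapes such as \u007f.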
00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.436 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 
00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x7c'
00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|'
00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125
00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d'
00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}'
00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ " == \- ]]
00:14:26.437 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '"CKABI")(`$6 /dev/null'
00:14:28.788 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:30.704 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:30.704
00:14:30.704 real 0m13.641s
00:14:30.704 user 0m19.157s
00:14:30.704 sys 0m6.664s
00:14:30.704 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:30.704 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:14:30.704 ************************************
00:14:30.704 END TEST nvmf_invalid
00:14:30.704 ************************************
00:14:30.704 09:48:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:30.704 09:48:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:30.704 09:48:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:30.704 09:48:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:30.704 ************************************
00:14:30.704 START TEST nvmf_connect_stress
00:14:30.704 ************************************
00:14:30.704 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:30.966 * Looking for test storage...
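With nvmf_invalid finished, the harness moves on to connect_stress.sh: autotest_common.sh locates the test storage and then, in the trace that follows, probes the installed lcov with cmp_versions from scripts/common.sh, which splits each version string on '.', '-' or ':' (the IFS=.-: assignment below) and compares the fields numerically left to right to decide which coverage options apply. A minimal sketch of that comparison, covering only the '<' and '>' operators exercised here (the real helper supports more):

    # sketch of cmp_versions: field-by-field numeric compare of dotted versions
    cmp_versions() {
        local IFS=.-:              # split on '.', '-' and ':', as in the trace
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v lt=0 gt=0
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && gt=1 && break
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && lt=1 && break
        done
        case "$2" in
            '<') (( lt == 1 )) ;;
            '>') (( gt == 1 )) ;;
        esac
    }
    # e.g. cmp_versions 1.15 '<' 2 succeeds, which is why the lcov 1.x
    # option set (--rc lcov_branch_coverage=1 ...) is chosen below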
00:14:30.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.966 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:30.966 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:14:30.966 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:30.966 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:30.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.967 --rc genhtml_branch_coverage=1 00:14:30.967 --rc genhtml_function_coverage=1 00:14:30.967 --rc genhtml_legend=1 00:14:30.967 --rc geninfo_all_blocks=1 00:14:30.967 --rc geninfo_unexecuted_blocks=1 00:14:30.967 00:14:30.967 ' 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:30.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.967 --rc genhtml_branch_coverage=1 00:14:30.967 --rc genhtml_function_coverage=1 00:14:30.967 --rc genhtml_legend=1 00:14:30.967 --rc geninfo_all_blocks=1 00:14:30.967 --rc geninfo_unexecuted_blocks=1 00:14:30.967 00:14:30.967 ' 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:30.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.967 --rc genhtml_branch_coverage=1 00:14:30.967 --rc genhtml_function_coverage=1 00:14:30.967 --rc genhtml_legend=1 00:14:30.967 --rc geninfo_all_blocks=1 00:14:30.967 --rc geninfo_unexecuted_blocks=1 00:14:30.967 00:14:30.967 ' 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:30.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.967 --rc genhtml_branch_coverage=1 00:14:30.967 --rc genhtml_function_coverage=1 00:14:30.967 --rc genhtml_legend=1 00:14:30.967 --rc geninfo_all_blocks=1 00:14:30.967 --rc geninfo_unexecuted_blocks=1 00:14:30.967 00:14:30.967 ' 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:30.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:30.967 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:30.968 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:30.968 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:30.968 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.968 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:30.968 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:30.968 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:30.968 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.968 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.968 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.968 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:30.968 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:30.968 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:30.968 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:39.116 09:48:08 
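[Editor's note] The "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message above is a recoverable bash error, not a test failure: `[ "$var" -eq 1 ]` errors out instead of evaluating false when the variable is empty, and the script simply falls through. A minimal sketch of the failure mode and two guarded alternatives, with `flag` standing in for whatever unset variable line 33 actually tests:

    flag=''                                    # empty, as in this run
    [ "$flag" -eq 1 ] && echo yes              # prints "[: : integer expression expected"
    [ "${flag:-0}" -eq 1 ] && echo yes         # defaulting keeps the comparison numeric
    [[ -n $flag && $flag -eq 1 ]] && echo yes  # or short-circuit on emptiness first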
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:39.116 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
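[Editor's note] The array plumbing above builds per-family ID tables (e810, x722, mlx) and intersects them with a cached scan of the PCI bus. A simplified standalone rendering of the same lookup, reading /sys directly instead of going through common.sh's pci_bus_cache; the 0x8086/0x159b pair is the Intel E810 "ice" part this run matches:

    intel=0x8086
    declare -a e810=()
    for dev in /sys/bus/pci/devices/*; do
        [[ -e $dev/vendor && -e $dev/device ]] || continue
        if [[ $(<"$dev/vendor") == "$intel" && $(<"$dev/device") == 0x159b ]]; then
            e810+=("${dev##*/}")               # keep the BDF, e.g. 0000:4b:00.0
        fi
    done
    (( ${#e810[@]} )) && printf 'Found %s (0x8086 - 0x159b)\n' "${e810[@]}"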
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:39.116 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:39.116 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:39.116 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
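[Editor's note] With the two E810 functions identified, the rest of discovery and the TCP init traced next reduce to a handful of commands: glob each function's netdev name out of /sys, move the target port into its own network namespace, and address the pair as a point-to-point 10.0.0.0/24 link. Condensed sketch using the names this run found (cvl_0_0/cvl_0_1; they differ per machine):

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
    done
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target port, isolated
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                              # initiator -> target sanity check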
-- # net_devs+=("${pci_net_devs[@]}") 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:39.116 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:39.117 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:39.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:39.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.725 ms 00:14:39.117 00:14:39.117 --- 10.0.0.2 ping statistics --- 00:14:39.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.117 rtt min/avg/max/mdev = 0.725/0.725/0.725/0.000 ms 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:39.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:39.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:14:39.117 00:14:39.117 --- 10.0.0.1 ping statistics --- 00:14:39.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.117 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1302618 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1302618 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1302618 ']' 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:39.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:39.117 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.117 [2024-11-20 09:48:09.384996] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:14:39.117 [2024-11-20 09:48:09.385062] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.117 [2024-11-20 09:48:09.486700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:39.117 [2024-11-20 09:48:09.538170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.117 [2024-11-20 09:48:09.538222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.117 [2024-11-20 09:48:09.538237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.117 [2024-11-20 09:48:09.538243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.117 [2024-11-20 09:48:09.538250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:39.117 [2024-11-20 09:48:09.540075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.117 [2024-11-20 09:48:09.540265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.117 [2024-11-20 09:48:09.540265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:39.378 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.378 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:39.378 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:39.378 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:39.379 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.379 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.379 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:39.379 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.379 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.379 [2024-11-20 09:48:10.253546] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.379 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.379 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:39.379 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
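[Editor's note] nvmfappstart above boils down to: launch nvmf_tgt inside the target namespace, record its pid, and block until the RPC socket answers. waitforlisten in autotest_common.sh is more careful than this (retries, bounded timeout); the following is a minimal equivalent, and the relative rpc.py path and poll interval are assumptions:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        sleep 0.5
    done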
00:14:39.379 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.379 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.379 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.379 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.379 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.379 [2024-11-20 09:48:10.279369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.379 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.379 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:39.379 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.379 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.640 NULL1 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1302926 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:39.640 09:48:10 
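[Editor's note] The RPC sequence and the twenty-entry rpc.txt assembled above drive the actual stress phase: provision a null-bdev-backed subsystem, point the connect_stress binary at it, then keep RPC traffic flowing while the binary churns connects and disconnects for ten seconds. The monitoring loop that fills the next stretch of this log reduces to a `kill -0` poll. Sketch only; the rpc.py invocation is assumed, and the real loop replays rpc.txt rather than a single call:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512
    ./test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2>/dev/null; do   # the loop ends with "No such process" below
        ./scripts/rpc.py rpc_get_methods >/dev/null
    done
    wait "$PERF_PID"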
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.640 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.901 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.901 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:39.901 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.901 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.901 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.162 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.162 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:40.162 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.162 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.162 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.733 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.733 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:40.733 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.733 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.733 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.994 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.994 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:40.994 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.994 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.994 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.254 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.254 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:41.254 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.254 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.254 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.514 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.514 09:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:41.514 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.514 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.514 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.084 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.084 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:42.084 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.084 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.084 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.344 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.344 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:42.344 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.344 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.344 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.605 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.605 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:42.605 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.605 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.605 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.865 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.865 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:42.865 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.865 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.865 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.125 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.125 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:43.125 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.125 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.125 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.696 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.696 09:48:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:43.696 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.696 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.696 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.955 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.956 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:43.956 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.956 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.956 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.216 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.216 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:44.216 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.216 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.216 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.476 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.476 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:44.476 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.476 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.476 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.736 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.736 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:44.736 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.736 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.736 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.307 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.307 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:45.307 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.307 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.307 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.568 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.568 09:48:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:45.568 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.568 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.568 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.829 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.829 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:45.829 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.829 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.829 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.090 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.090 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:46.090 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.090 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.090 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.350 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.350 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:46.350 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.350 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.350 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.931 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.931 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:46.931 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.931 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.931 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.192 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.192 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:47.192 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.192 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.192 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.453 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.453 09:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:47.453 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.453 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.453 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.713 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.713 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:47.713 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.713 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.713 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.285 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.285 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:48.285 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.285 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.285 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.545 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:48.545 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.545 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.545 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.804 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.804 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:48.804 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.804 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.804 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.064 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.064 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:49.064 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.064 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.064 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.324 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.324 09:48:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:49.325 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.325 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.325 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.585 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1302926 00:14:49.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1302926) - No such process 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1302926 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:49.846 rmmod nvme_tcp 00:14:49.846 rmmod nvme_fabrics 00:14:49.846 rmmod nvme_keyring 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1302618 ']' 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1302618 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1302618 ']' 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1302618 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1302618 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
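[Editor's note] Teardown, traced across this stretch, mirrors the setup: the kernel initiator modules come out (the rmmod lines above), the target app is killed and reaped, the tagged firewall rule is filtered away, and the namespace goes with its interface. The SPDK_NVMF comment attached to the ACCEPT rule during init exists precisely so one save/filter/restore pass can drop every SPDK rule at once. Condensed sketch; treating namespace deletion as what _remove_spdk_ns amounts to here is an assumption:

    modprobe -v -r nvme-tcp nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null
    ip -4 addr flush cvl_0_1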
00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1302618' 00:14:49.846 killing process with pid 1302618 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1302618 00:14:49.846 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1302618 00:14:50.107 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:50.107 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:50.107 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:50.107 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:50.107 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:50.107 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:50.107 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:50.107 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:50.107 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:50.107 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.107 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.107 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.019 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:52.019 00:14:52.019 real 0m21.252s 00:14:52.019 user 0m42.228s 00:14:52.019 sys 0m9.262s 00:14:52.019 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:52.019 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.019 ************************************ 00:14:52.019 END TEST nvmf_connect_stress 00:14:52.019 ************************************ 00:14:52.019 09:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:52.019 09:48:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:52.019 09:48:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:52.019 09:48:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:52.019 ************************************ 00:14:52.019 START TEST nvmf_fused_ordering 00:14:52.019 ************************************ 00:14:52.281 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:52.281 * Looking for test storage... 
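[Editor's note] The starred banners bracketing each test come from autotest's run_test wrapper, which names the test, times the script, and propagates its exit status. The real helper also handles xtrace and timing bookkeeping, so treat this as a skeletal sketch of the visible behavior only:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }
    run_test nvmf_fused_ordering ./test/nvmf/target/fused_ordering.sh --transport=tcp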
00:14:52.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.281 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:52.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.282 --rc genhtml_branch_coverage=1 00:14:52.282 --rc genhtml_function_coverage=1 00:14:52.282 --rc genhtml_legend=1 00:14:52.282 --rc geninfo_all_blocks=1 00:14:52.282 --rc geninfo_unexecuted_blocks=1 00:14:52.282 00:14:52.282 ' 00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:52.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.282 --rc genhtml_branch_coverage=1 00:14:52.282 --rc genhtml_function_coverage=1 00:14:52.282 --rc genhtml_legend=1 00:14:52.282 --rc geninfo_all_blocks=1 00:14:52.282 --rc geninfo_unexecuted_blocks=1 00:14:52.282 00:14:52.282 ' 00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:52.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.282 --rc genhtml_branch_coverage=1 00:14:52.282 --rc genhtml_function_coverage=1 00:14:52.282 --rc genhtml_legend=1 00:14:52.282 --rc geninfo_all_blocks=1 00:14:52.282 --rc geninfo_unexecuted_blocks=1 00:14:52.282 00:14:52.282 ' 00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:52.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.282 --rc genhtml_branch_coverage=1 00:14:52.282 --rc genhtml_function_coverage=1 00:14:52.282 --rc genhtml_legend=1 00:14:52.282 --rc geninfo_all_blocks=1 00:14:52.282 --rc geninfo_unexecuted_blocks=1 00:14:52.282 00:14:52.282 ' 00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain directories repeated, elided]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[the same three toolchain directories repeated, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[the same three toolchain directories repeated, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[the same three toolchain directories repeated, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
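One quirk worth noticing in the paths/export.sh records above: each time the file is sourced it prepends the same three toolchain directories, so PATH accumulates duplicate entries across nested sources. Shells tolerate this, but if the growth ever mattered, a small dedup along these lines (a sketch, not part of the harness) would restore a canonical PATH:

  # Keep the first occurrence of each PATH entry, preserving order.
  PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
  PATH=${PATH%:}   # trim the trailing ':' that awk's ORS leaves behind

The !seen[$0]++ idiom prints a record only the first time awk encounters it.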
00:14:52.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable
00:14:52.282 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=()
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=()
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=()
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=()
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=()
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=()
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=()
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:15:00.424 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:15:00.424 Found 0000:4b:00.1 (0x8086 - 0x159b)
[the same @368/@372/@376/@377/@378 driver checks repeat for 0000:4b:00.1 and are elided]
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]]
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:15:00.424 Found net devices under 0000:4b:00.0: cvl_0_0
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
[the same @410-@428 per-device discovery records repeat for 0000:4b:00.1 and are elided]
00:15:00.424 Found net devices under 0000:4b:00.1: cvl_0_1
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
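The only scripting fault in this excerpt is benign but real: the "integer expression expected" message at nvmf/common.sh line 33 above comes from '[' '' -eq 1 ']', i.e. an unset test flag expanding to the empty string inside a numeric [ ... -eq ... ] test. A defensive sketch of the same check (SOME_FLAG is a stand-in name, not a variable from the source):

  # Default the flag to 0 so the numeric comparison always sees an integer.
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo 'flag enabled'
  fi

With the :-0 default the test quietly evaluates to false instead of printing an error when the flag is unset, which is also the behavior this run falls back to.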
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:15:00.424 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:15:00.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:00.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms
00:15:00.424
00:15:00.424 --- 10.0.0.2 ping statistics ---
00:15:00.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:00.424 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:00.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:00.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms
00:15:00.425
00:15:00.425 --- 10.0.0.1 ping statistics ---
00:15:00.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:00.425 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1309316
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1309316
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1309316 ']'
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:00.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:00.425 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:00.425 [2024-11-20 09:48:30.744785] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:15:00.425 [2024-11-20 09:48:30.744851] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:00.425 [2024-11-20 09:48:30.842185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:00.425 [2024-11-20 09:48:30.891902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:00.425 [2024-11-20 09:48:30.891950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:00.425 [2024-11-20 09:48:30.891959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:00.425 [2024-11-20 09:48:30.891965] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:00.425 [2024-11-20 09:48:30.891973] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:00.425 [2024-11-20 09:48:30.892778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:00.687 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:00.687 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0
00:15:00.687 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:15:00.687 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable
00:15:00.687 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:00.948 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:00.948 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:15:00.948 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:00.948 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:00.948 [2024-11-20 09:48:31.610201] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:00.948 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[each rpc_cmd below is bracketed by the same @563 xtrace_disable / @10 set +x / @591 [[ 0 == 0 ]] records; those repeats are elided]
00:15:00.948 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:15:00.948 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:00.948 [2024-11-20 09:48:31.634497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:00.948 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:15:00.948 NULL1
00:15:00.948 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:15:00.948 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:15:00.948 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:15:00.948 [2024-11-20 09:48:31.703940] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
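Everything the target needs for this test is configured through rpc_cmd, which in this harness hands the verbs to SPDK's scripts/rpc.py. Condensed from the fused_ordering.sh@15-20 records above, an equivalent manual bring-up is sketched below; this assumes an nvmf_tgt already running and listening on the default /var/tmp/spdk.sock, $SPDK_DIR is a stand-in for the checkout path, and the ip netns prefix the harness uses is omitted:

  rpc=$SPDK_DIR/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, flags exactly as traced
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512              # 1000 MB null bdev, 512-byte blocks
  $rpc bdev_wait_for_examine
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The null bdev is what the fused_ordering tool then sees as namespace 1; the "Namespace ID: 1 size: 1GB" line in the output that follows matches the 1000 MB bdev created here.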
00:15:00.948 [2024-11-20 09:48:31.703986] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309570 ]
00:15:01.519 Attached to nqn.2016-06.io.spdk:cnode1
00:15:01.519 Namespace ID: 1 size: 1GB
00:15:01.519 fused_ordering(0)
[fused_ordering(1) through fused_ordering(1022) elided: the tool logged one such numbered entry per iteration, strictly in order, between 00:15:01.519 and 00:15:03.503]
00:15:03.503 fused_ordering(1023)
00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:15:03.503 rmmod nvme_tcp
00:15:03.503 rmmod nvme_fabrics
00:15:03.503 rmmod nvme_keyring
00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:15:03.503 09:48:34
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1309316 ']' 00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1309316 00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1309316 ']' 00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1309316 00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1309316 00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1309316' 00:15:03.503 killing process with pid 1309316 00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1309316 00:15:03.503 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1309316 00:15:03.764 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:03.764 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:03.764 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:03.764 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:15:03.764 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:15:03.764 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:03.764 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:15:03.764 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:03.764 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:03.764 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.764 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:03.764 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.710 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:05.710 00:15:05.710 real 0m13.656s 00:15:05.710 user 0m7.281s 00:15:05.710 sys 0m7.351s 00:15:05.710 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:05.710 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:05.710 ************************************ 00:15:05.710 END TEST nvmf_fused_ordering 00:15:05.710 
************************************ 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:06.010 ************************************ 00:15:06.010 START TEST nvmf_ns_masking 00:15:06.010 ************************************ 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:06.010 * Looking for test storage... 00:15:06.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:06.010 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:06.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.011 --rc genhtml_branch_coverage=1 00:15:06.011 --rc genhtml_function_coverage=1 00:15:06.011 --rc genhtml_legend=1 00:15:06.011 --rc geninfo_all_blocks=1 00:15:06.011 --rc geninfo_unexecuted_blocks=1 00:15:06.011 00:15:06.011 ' 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:06.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.011 --rc genhtml_branch_coverage=1 00:15:06.011 --rc genhtml_function_coverage=1 00:15:06.011 --rc genhtml_legend=1 00:15:06.011 --rc geninfo_all_blocks=1 00:15:06.011 --rc geninfo_unexecuted_blocks=1 00:15:06.011 00:15:06.011 ' 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:06.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.011 --rc genhtml_branch_coverage=1 00:15:06.011 --rc genhtml_function_coverage=1 00:15:06.011 --rc genhtml_legend=1 00:15:06.011 --rc geninfo_all_blocks=1 00:15:06.011 --rc geninfo_unexecuted_blocks=1 00:15:06.011 00:15:06.011 ' 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:06.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.011 --rc genhtml_branch_coverage=1 00:15:06.011 --rc genhtml_function_coverage=1 00:15:06.011 --rc genhtml_legend=1 00:15:06.011 --rc geninfo_all_blocks=1 00:15:06.011 --rc geninfo_unexecuted_blocks=1 00:15:06.011 00:15:06.011 ' 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:06.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:06.011 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4ae71ba9-d405-428c-bb0e-432c91494328 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c6e5b343-8f79-46de-8b37-862cb1405db6 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=1d88c4a1-c375-44cc-8ac4-4b77c63e9647 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:15:06.272 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:14.410 09:48:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:14.410 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:14.410 09:48:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:14.410 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.410 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:14.411 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
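Note: the discovery trace above matches supported NICs by PCI vendor:device ID (0x8086:0x159b, Intel E810 on this rig) and then resolves each PCI function to its kernel interface through sysfs. A minimal sketch of that resolution, using lspci here in place of the harness's pci_bus_cache helper (the E810 filter and the cvl_0_* names are specific to this machine):

  # Enumerate E810 functions, then map each PCI address to its netdev name(s).
  pci_devs=($(lspci -Dnm | awk '$3 == "\"8086\"" && $4 == "\"159b\"" {print $1}'))
  net_devs=()
  for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs lists the bound interfaces
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the names, e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
  done
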
00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:14.411 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:14.411 09:48:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:14.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:14.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:15:14.411 00:15:14.411 --- 10.0.0.2 ping statistics --- 00:15:14.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.411 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:14.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:14.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:15:14.411 00:15:14.411 --- 10.0.0.1 ping statistics --- 00:15:14.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.411 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1314337 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1314337 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1314337 ']' 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.411 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:14.411 [2024-11-20 09:48:44.521936] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:15:14.411 [2024-11-20 09:48:44.522004] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.411 [2024-11-20 09:48:44.620465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.411 [2024-11-20 09:48:44.670787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.411 [2024-11-20 09:48:44.670836] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.411 [2024-11-20 09:48:44.670844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.411 [2024-11-20 09:48:44.670852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.411 [2024-11-20 09:48:44.670859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
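Note: at this point nvmf_tgt has been launched inside the cvl_0_0_ns_spdk network namespace and the harness is waiting on its RPC socket; the trace that follows provisions the target over that socket. A condensed sketch of the same sequence (paths and values taken from this run; the socket-poll loop is a simplified stand-in for the waitforlisten helper):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done    # wait for the RPC socket

  rpc="$SPDK/scripts/rpc.py"
  "$rpc" nvmf_create_transport -t tcp -o -u 8192         # flags as invoked by the harness
  "$rpc" bdev_malloc_create 64 512 -b Malloc1            # 64 MiB bdev, 512 B blocks
  "$rpc" bdev_malloc_create 64 512 -b Malloc2
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
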
00:15:14.411 [2024-11-20 09:48:44.671643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.672 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.672 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:14.672 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:14.672 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:14.672 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:14.672 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.672 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:14.672 [2024-11-20 09:48:45.541889] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.672 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:14.672 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:14.672 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:14.934 Malloc1 00:15:14.934 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:15.195 Malloc2 00:15:15.195 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:15.456 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:15.717 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.717 [2024-11-20 09:48:46.562927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.717 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:15.717 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1d88c4a1-c375-44cc-8ac4-4b77c63e9647 -a 10.0.0.2 -s 4420 -i 4 00:15:15.978 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:15.978 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:15.978 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:15.978 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:15.978 
09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:18.525 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:18.525 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:18.525 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:18.525 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:18.525 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:18.525 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:18.525 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:18.525 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:18.525 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:18.525 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:18.525 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:18.525 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:18.525 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:18.525 [ 0]:0x1 00:15:18.525 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:18.525 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:18.525 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df80bc8d9dae481ba6b1669387824a05 00:15:18.525 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df80bc8d9dae481ba6b1669387824a05 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:18.525 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:18.525 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:18.525 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:18.525 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:18.525 [ 0]:0x1 00:15:18.525 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:18.525 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:18.525 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df80bc8d9dae481ba6b1669387824a05 00:15:18.525 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df80bc8d9dae481ba6b1669387824a05 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:18.525 09:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:18.525 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:18.525 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:18.525 [ 1]:0x2 00:15:18.525 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:18.525 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:18.525 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=145a122384484c2e9750820c89e0bbb0 00:15:18.525 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 145a122384484c2e9750820c89e0bbb0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:18.525 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:18.525 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:18.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.525 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:18.785 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:18.785 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:18.785 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1d88c4a1-c375-44cc-8ac4-4b77c63e9647 -a 10.0.0.2 -s 4420 -i 4 00:15:19.046 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:19.046 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:19.046 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:19.046 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:15:19.046 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:15:19.046 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:20.960 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:20.960 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:20.960 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:20.960 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:20.960 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:20.960 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:15:20.960 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:20.960 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:21.220 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:21.220 [ 0]:0x2 00:15:21.220 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:21.220 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:21.220 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=145a122384484c2e9750820c89e0bbb0 00:15:21.220 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 145a122384484c2e9750820c89e0bbb0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:21.220 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:21.481 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:21.481 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:21.481 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:21.481 [ 0]:0x1 00:15:21.481 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:21.481 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:21.481 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df80bc8d9dae481ba6b1669387824a05 00:15:21.481 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df80bc8d9dae481ba6b1669387824a05 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:21.481 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:21.481 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:21.481 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:21.481 [ 1]:0x2 00:15:21.481 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:21.481 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:21.481 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=145a122384484c2e9750820c89e0bbb0 00:15:21.481 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 145a122384484c2e9750820c89e0bbb0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:21.481 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.742 09:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:21.742 [ 0]:0x2 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:21.742 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:22.003 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=145a122384484c2e9750820c89e0bbb0 00:15:22.004 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 145a122384484c2e9750820c89e0bbb0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:22.004 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:22.004 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:22.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.004 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:22.264 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:22.264 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1d88c4a1-c375-44cc-8ac4-4b77c63e9647 -a 10.0.0.2 -s 4420 -i 4 00:15:22.265 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:22.265 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:22.265 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:22.265 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:22.265 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:22.265 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:24.175 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:24.175 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:24.175 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:24.175 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:24.175 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:24.175 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:24.436 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:24.436 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:24.436 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:24.436 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:24.436 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:24.436 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:24.436 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:24.436 [ 0]:0x1 00:15:24.436 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:24.436 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:24.436 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df80bc8d9dae481ba6b1669387824a05 00:15:24.436 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df80bc8d9dae481ba6b1669387824a05 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:24.436 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:24.436 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:24.436 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:24.436 [ 1]:0x2 00:15:24.436 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:24.436 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:24.436 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=145a122384484c2e9750820c89e0bbb0 00:15:24.436 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 145a122384484c2e9750820c89e0bbb0 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:24.436 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:24.696 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:24.696 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:24.696 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:24.696 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:24.696 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:24.696 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:24.696 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:24.696 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:24.696 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:24.696 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:24.696 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:24.696 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:24.696 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:24.697 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:24.697 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:24.697 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:24.697 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:24.697 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:24.697 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:24.697 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:24.697 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:24.697 [ 0]:0x2 00:15:24.697 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:24.697 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:24.697 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=145a122384484c2e9750820c89e0bbb0 00:15:24.697 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 145a122384484c2e9750820c89e0bbb0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:24.697 09:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:24.697 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:24.697 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:24.697 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:24.957 [2024-11-20 09:48:55.768440] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:24.957 request: 00:15:24.957 { 00:15:24.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:24.957 "nsid": 2, 00:15:24.957 "host": "nqn.2016-06.io.spdk:host1", 00:15:24.957 "method": "nvmf_ns_remove_host", 00:15:24.957 "req_id": 1 00:15:24.957 } 00:15:24.957 Got JSON-RPC error response 00:15:24.957 response: 00:15:24.957 { 00:15:24.957 "code": -32602, 00:15:24.957 "message": "Invalid parameters" 00:15:24.957 } 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:24.957 09:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:24.957 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:25.219 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:25.219 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:25.219 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:25.219 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:25.219 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:25.219 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:25.219 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:25.219 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:25.219 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:25.219 [ 0]:0x2 00:15:25.219 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:25.219 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:25.219 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=145a122384484c2e9750820c89e0bbb0 00:15:25.219 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 145a122384484c2e9750820c89e0bbb0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:25.219 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:25.219 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:25.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.219 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1316568 00:15:25.219 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:25.219 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.219 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1316568 /var/tmp/host.sock 00:15:25.219 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1316568 ']' 00:15:25.219 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:25.219 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.219 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:25.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:25.219 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.219 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:25.219 [2024-11-20 09:48:56.077743] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:15:25.219 [2024-11-20 09:48:56.077796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1316568 ] 00:15:25.480 [2024-11-20 09:48:56.168182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.480 [2024-11-20 09:48:56.203987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.052 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.052 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:26.052 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:26.312 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:26.573 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4ae71ba9-d405-428c-bb0e-432c91494328 00:15:26.573 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:26.573 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4AE71BA9D405428CBB0E432C91494328 -i 00:15:26.573 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c6e5b343-8f79-46de-8b37-862cb1405db6 00:15:26.573 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:26.573 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C6E5B3438F7946DE8B37862CB1405DB6 -i 00:15:26.833 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:27.094 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:27.094 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:27.094 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:27.665 nvme0n1 00:15:27.665 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:27.665 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:27.925 nvme1n2 00:15:28.185 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:28.185 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:28.185 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:28.185 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:28.185 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:28.185 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:28.185 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:28.185 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:28.185 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:28.446 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4ae71ba9-d405-428c-bb0e-432c91494328 == \4\a\e\7\1\b\a\9\-\d\4\0\5\-\4\2\8\c\-\b\b\0\e\-\4\3\2\c\9\1\4\9\4\3\2\8 ]] 00:15:28.446 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:28.446 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:28.446 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:28.706 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
c6e5b343-8f79-46de-8b37-862cb1405db6 == \c\6\e\5\b\3\4\3\-\8\f\7\9\-\4\6\d\e\-\8\b\3\7\-\8\6\2\c\b\1\4\0\5\d\b\6 ]] 00:15:28.706 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.706 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:28.967 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 4ae71ba9-d405-428c-bb0e-432c91494328 00:15:28.967 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:28.967 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4AE71BA9D405428CBB0E432C91494328 00:15:28.967 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:28.967 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4AE71BA9D405428CBB0E432C91494328 00:15:28.967 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:28.967 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.967 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:28.967 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.967 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:28.967 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.967 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:28.967 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:28.967 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4AE71BA9D405428CBB0E432C91494328 00:15:29.228 [2024-11-20 09:48:59.927295] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:29.228 [2024-11-20 09:48:59.927323] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:29.228 [2024-11-20 09:48:59.927330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.228 request: 00:15:29.228 { 00:15:29.228 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.228 "namespace": { 00:15:29.228 "bdev_name": 
"invalid", 00:15:29.228 "nsid": 1, 00:15:29.228 "nguid": "4AE71BA9D405428CBB0E432C91494328", 00:15:29.228 "no_auto_visible": false 00:15:29.228 }, 00:15:29.228 "method": "nvmf_subsystem_add_ns", 00:15:29.228 "req_id": 1 00:15:29.228 } 00:15:29.228 Got JSON-RPC error response 00:15:29.228 response: 00:15:29.228 { 00:15:29.228 "code": -32602, 00:15:29.228 "message": "Invalid parameters" 00:15:29.228 } 00:15:29.228 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:29.228 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:29.228 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:29.228 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:29.228 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 4ae71ba9-d405-428c-bb0e-432c91494328 00:15:29.228 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:29.228 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4AE71BA9D405428CBB0E432C91494328 -i 00:15:29.228 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:31.772 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:31.772 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:31.772 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:31.773 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:31.773 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1316568 00:15:31.773 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1316568 ']' 00:15:31.773 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1316568 00:15:31.773 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:31.773 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.773 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1316568 00:15:31.773 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:31.773 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:31.773 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1316568' 00:15:31.773 killing process with pid 1316568 00:15:31.773 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1316568 00:15:31.773 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1316568 00:15:31.773 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:32.033 rmmod nvme_tcp 00:15:32.033 rmmod nvme_fabrics 00:15:32.033 rmmod nvme_keyring 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1314337 ']' 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1314337 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1314337 ']' 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1314337 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1314337 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1314337' 00:15:32.033 killing process with pid 1314337 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1314337 00:15:32.033 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1314337 00:15:32.293 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:32.293 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:32.293 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:32.293 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:32.293 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:32.293 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
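The repeated @43-@45 xtrace in this run condenses to a single visibility probe: list the NSIDs the controller exposes, then read the NGUID back, since a namespace the target masks from this host reports an all-zero NGUID. A minimal sketch, assuming the controller node resolved to /dev/nvme0 as it does above (ns_is_visible here is a hypothetical condensation of the traced helper, not the script verbatim):

    ns_is_visible() {
        local nsid=$1 nguid
        # Is the NSID present in the controller's namespace list at all?
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        # Masked namespaces identify with an all-zero NGUID.
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

On the target side the masking is toggled with the RPCs traced above, using the workspace rpc.py path from this log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible   # hidden by default
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1           # ns_is_visible 0x1 now succeeds
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1        # masked again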
00:15:32.293 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:32.293 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:32.293 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:32.293 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.293 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.293 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.205 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:34.205 00:15:34.205 real 0m28.410s 00:15:34.205 user 0m32.541s 00:15:34.205 sys 0m8.319s 00:15:34.205 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.205 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:34.205 ************************************ 00:15:34.205 END TEST nvmf_ns_masking 00:15:34.205 ************************************ 00:15:34.466 09:49:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:34.466 09:49:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:34.466 09:49:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:34.466 09:49:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.466 09:49:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:34.466 ************************************ 00:15:34.466 START TEST nvmf_nvme_cli 00:15:34.466 ************************************ 00:15:34.466 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:34.466 * Looking for test storage... 
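One hedged reconstruction worth recording from the masking run that just ended: the uuid2nguid helper (nvmf/common.sh@787) only shows tr -d - in its xtrace, yet the -g values handed to nvmf_subsystem_add_ns come out uppercase, so the helper presumably upcases as well. Assuming exactly that:

    uuid2nguid() { tr -d - <<< "${1^^}"; }               # assumption: upcase, then strip dashes
    uuid2nguid 4ae71ba9-d405-428c-bb0e-432c91494328      # -> 4AE71BA9D405428CBB0E432C91494328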
00:15:34.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:34.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.467 --rc genhtml_branch_coverage=1 00:15:34.467 --rc genhtml_function_coverage=1 00:15:34.467 --rc genhtml_legend=1 00:15:34.467 --rc geninfo_all_blocks=1 00:15:34.467 --rc geninfo_unexecuted_blocks=1 00:15:34.467 00:15:34.467 ' 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:34.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.467 --rc genhtml_branch_coverage=1 00:15:34.467 --rc genhtml_function_coverage=1 00:15:34.467 --rc genhtml_legend=1 00:15:34.467 --rc geninfo_all_blocks=1 00:15:34.467 --rc geninfo_unexecuted_blocks=1 00:15:34.467 00:15:34.467 ' 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:34.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.467 --rc genhtml_branch_coverage=1 00:15:34.467 --rc genhtml_function_coverage=1 00:15:34.467 --rc genhtml_legend=1 00:15:34.467 --rc geninfo_all_blocks=1 00:15:34.467 --rc geninfo_unexecuted_blocks=1 00:15:34.467 00:15:34.467 ' 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:34.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.467 --rc genhtml_branch_coverage=1 00:15:34.467 --rc genhtml_function_coverage=1 00:15:34.467 --rc genhtml_legend=1 00:15:34.467 --rc geninfo_all_blocks=1 00:15:34.467 --rc geninfo_unexecuted_blocks=1 00:15:34.467 00:15:34.467 ' 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
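The scripts/common.sh trace above (lt 1.15 2 via cmp_versions) is a plain per-component version comparison: split both versions on '.' or '-', then compare numerically left to right, padding the shorter one with zeros. A compact sketch of the same algorithm (version_lt is a hypothetical name; the script's own entry points are lt/cmp_versions):

    version_lt() {
        local IFS=.- v
        read -ra a <<< "$1"; read -ra b <<< "$2"
        # Walk max(len(a), len(b)) components, as the @364 loop bound does.
        for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
            (( 10#${a[v]:-0} < 10#${b[v]:-0} )) && return 0
            (( 10#${a[v]:-0} > 10#${b[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2   # true here: 1.15 < 2, so the --rc lcov_*_coverage=1 options are selected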
00:15:34.467 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.728 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:34.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:34.729 09:49:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:34.729 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:42.866 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:42.866 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.866 
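
The trace above is the gather_supported_nvmf_pci_devs step: nvmf/common.sh matches known Intel (0x8086) and Mellanox (0x15b3) device IDs against the PCI bus, then resolves each matched function to its kernel interface through sysfs. A minimal sketch of that resolution, seeded with the two ice-bound functions found in this run (the pci_devs contents are copied from the log; the loop body mirrors the traced expansions):

pci_devs=(0000:4b:00.0 0000:4b:00.1)         # the 0x8086:0x159b functions found above
for pci in "${pci_devs[@]}"; do
    # each PCI function exposes its bound kernel interfaces under net/
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")  # strip the sysfs path, keep e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
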
09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:42.866 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:42.866 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:42.866 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:42.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:15:42.867 00:15:42.867 --- 10.0.0.2 ping statistics --- 00:15:42.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.867 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:42.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:42.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:15:42.867 00:15:42.867 --- 10.0.0.1 ping statistics --- 00:15:42.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.867 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1322235 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1322235 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1322235 ']' 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.867 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:42.867 [2024-11-20 09:49:12.978928] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
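
The namespace plumbing traced above turns one two-port NIC into a self-contained target/initiator pair: cvl_0_0 moves into a private network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and an iptables exception admits the NVMe/TCP port before a cross-ping verifies the path. A condensed sketch of the same sequence (interface names and addresses copied from the log):

ip netns add cvl_0_0_ns_spdk                         # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit the listener port
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Putting the target port in its own namespace is what lets a single host exercise a real NIC-to-NIC TCP path instead of loopback.
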
00:15:42.867 [2024-11-20 09:49:12.978995] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.867 [2024-11-20 09:49:13.080676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:42.867 [2024-11-20 09:49:13.134851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.867 [2024-11-20 09:49:13.134906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.867 [2024-11-20 09:49:13.134915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.867 [2024-11-20 09:49:13.134922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.867 [2024-11-20 09:49:13.134929] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.867 [2024-11-20 09:49:13.137050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.867 [2024-11-20 09:49:13.137253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.867 [2024-11-20 09:49:13.137326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:42.867 [2024-11-20 09:49:13.137327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:43.128 [2024-11-20 09:49:13.850049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:43.128 Malloc0 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
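
nvmfappstart launches the target inside that namespace and waitforlisten blocks until the app is answering RPCs on /var/tmp/spdk.sock. A rough equivalent of the two helpers; the rpc_get_methods poll is a simplified stand-in for waitforlisten's actual pid-and-socket checks:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# block until the target answers on its UNIX-domain RPC socket
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
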
00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:43.128 Malloc1 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:43.128 [2024-11-20 09:49:13.965887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.128 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:15:43.388 00:15:43.388 Discovery Log Number of Records 2, Generation counter 2 00:15:43.388 =====Discovery Log Entry 0====== 00:15:43.388 trtype: tcp 00:15:43.388 adrfam: ipv4 00:15:43.388 subtype: current discovery subsystem 00:15:43.388 treq: not required 00:15:43.388 portid: 0 00:15:43.388 trsvcid: 4420 00:15:43.388 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:15:43.388 traddr: 10.0.0.2 00:15:43.388 eflags: explicit discovery connections, duplicate discovery information 00:15:43.388 sectype: none 00:15:43.388 =====Discovery Log Entry 1====== 00:15:43.388 trtype: tcp 00:15:43.388 adrfam: ipv4 00:15:43.388 subtype: nvme subsystem 00:15:43.388 treq: not required 00:15:43.388 portid: 0 00:15:43.388 trsvcid: 4420 00:15:43.388 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:43.388 traddr: 10.0.0.2 00:15:43.388 eflags: none 00:15:43.388 sectype: none 00:15:43.388 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:43.388 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:43.388 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:43.388 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:43.388 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:43.388 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:43.388 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:43.388 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:43.388 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:43.388 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:43.388 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:45.295 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:45.295 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:15:45.295 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:45.295 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:45.295 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:45.295 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:47.209 09:49:17 
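
At this point the test has provisioned the target over RPC and confirmed it through the discovery service: two 64 MiB / 512 B-block malloc bdevs become namespaces of subsystem cnode1, which listens on 10.0.0.2:4420 alongside the discovery subsystem. The sequence, condensed from the trace (rpc.py abbreviates the full scripts/rpc.py path shown in the log):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME \
       -d SPDK_Controller1 -i 291
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420

The two discovery-log records printed above (the discovery subsystem itself plus cnode1) are exactly what this layout should produce.
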
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:47.209 /dev/nvme0n2 ]] 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:47.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.209 09:49:17 
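
The connect/disconnect round trip traced here: the host attaches to cnode1, waitforserial polls lsblk until both namespaces surface as block devices carrying the subsystem serial, get_nvme_devs counts the resulting /dev/nvme* nodes, and the controller is detached again. Condensed (serial and NQN copied from the log):

nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
     -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
# wait until both namespaces show up with the expected serial
while (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) < 2 )); do
    sleep 2
done
nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "disconnected 1 controller(s)"
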
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:47.209 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:47.209 rmmod nvme_tcp 00:15:47.209 rmmod nvme_fabrics 00:15:47.209 rmmod nvme_keyring 00:15:47.209 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:47.209 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:47.209 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:47.209 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1322235 ']' 00:15:47.209 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1322235 00:15:47.209 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1322235 ']' 00:15:47.209 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1322235 00:15:47.209 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:15:47.209 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.209 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1322235 00:15:47.209 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:47.209 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:47.209 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1322235' 00:15:47.209 killing process with pid 1322235 00:15:47.209 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1322235 00:15:47.209 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1322235 00:15:47.470 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:47.470 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:47.470 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:47.470 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:47.470 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:15:47.470 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:47.470 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:15:47.470 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:47.470 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:47.470 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.470 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.470 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.381 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:49.643 00:15:49.643 real 0m15.127s 00:15:49.643 user 0m22.477s 00:15:49.643 sys 0m6.439s 00:15:49.643 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.643 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:49.643 ************************************ 00:15:49.643 END TEST nvmf_nvme_cli 00:15:49.643 ************************************ 00:15:49.643 09:49:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:49.643 09:49:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:49.643 09:49:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:49.643 09:49:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.643 09:49:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:49.643 ************************************ 00:15:49.643 START TEST nvmf_vfio_user 00:15:49.643 ************************************ 00:15:49.643 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:15:49.643 * Looking for test storage... 00:15:49.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.643 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:49.643 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:15:49.643 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:49.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.904 --rc genhtml_branch_coverage=1 00:15:49.904 --rc genhtml_function_coverage=1 00:15:49.904 --rc genhtml_legend=1 00:15:49.904 --rc geninfo_all_blocks=1 00:15:49.904 --rc geninfo_unexecuted_blocks=1 00:15:49.904 00:15:49.904 ' 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:49.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.904 --rc genhtml_branch_coverage=1 00:15:49.904 --rc genhtml_function_coverage=1 00:15:49.904 --rc genhtml_legend=1 00:15:49.904 --rc geninfo_all_blocks=1 00:15:49.904 --rc geninfo_unexecuted_blocks=1 00:15:49.904 00:15:49.904 ' 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:49.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.904 --rc genhtml_branch_coverage=1 00:15:49.904 --rc genhtml_function_coverage=1 00:15:49.904 --rc genhtml_legend=1 00:15:49.904 --rc geninfo_all_blocks=1 00:15:49.904 --rc geninfo_unexecuted_blocks=1 00:15:49.904 00:15:49.904 ' 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:49.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.904 --rc genhtml_branch_coverage=1 00:15:49.904 --rc genhtml_function_coverage=1 00:15:49.904 --rc genhtml_legend=1 00:15:49.904 --rc geninfo_all_blocks=1 00:15:49.904 --rc geninfo_unexecuted_blocks=1 00:15:49.904 00:15:49.904 ' 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.904 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:49.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1323771 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1323771' 00:15:49.905 Process pid: 1323771 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1323771 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1323771 ']' 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.905 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:49.905 [2024-11-20 09:49:20.668067] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:15:49.905 [2024-11-20 09:49:20.668121] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.905 [2024-11-20 09:49:20.751988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:49.905 [2024-11-20 09:49:20.787856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.905 [2024-11-20 09:49:20.787889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:49.905 [2024-11-20 09:49:20.787894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.905 [2024-11-20 09:49:20.787899] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.905 [2024-11-20 09:49:20.787904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.905 [2024-11-20 09:49:20.789469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.905 [2024-11-20 09:49:20.789623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.905 [2024-11-20 09:49:20.789777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.905 [2024-11-20 09:49:20.789779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:50.848 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.848 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:50.848 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:51.789 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:51.789 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:51.789 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:51.789 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:51.789 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:51.789 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:52.050 Malloc1 00:15:52.050 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:52.311 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:52.571 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:52.571 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:52.571 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:52.571 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:52.832 Malloc2 00:15:52.832 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
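
The vfio-user variant replaces the TCP listener with a VFIOUSER transport whose "address" is a directory holding the emulated NVMe controller's socket; setup_nvmf_vfio_user repeats the malloc/subsystem pattern once per device. Condensed from the trace (rpc.py again abbreviates the full path):

rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
       -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
# ...then the same again with vfio-user2/2, Malloc2, cnode2 and serial SPDK2
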
00:15:53.094 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:53.094 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:53.356 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:53.356 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:53.356 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:53.356 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:53.356 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:53.356 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:53.356 [2024-11-20 09:49:24.192665] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:15:53.356 [2024-11-20 09:49:24.192708] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1324520 ] 00:15:53.356 [2024-11-20 09:49:24.233472] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:53.356 [2024-11-20 09:49:24.237166] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:53.356 [2024-11-20 09:49:24.237183] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd1b5476000 00:15:53.356 [2024-11-20 09:49:24.237727] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:53.356 [2024-11-20 09:49:24.238733] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:53.356 [2024-11-20 09:49:24.239741] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:53.356 [2024-11-20 09:49:24.240746] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:53.356 [2024-11-20 09:49:24.241754] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:53.356 [2024-11-20 09:49:24.242755] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:53.356 [2024-11-20 09:49:24.243760] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:15:53.356 [2024-11-20 09:49:24.244767] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:53.356 [2024-11-20 09:49:24.245774] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:53.356 [2024-11-20 09:49:24.245781] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd1b546b000 00:15:53.356 [2024-11-20 09:49:24.246693] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:53.356 [2024-11-20 09:49:24.259148] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:53.356 [2024-11-20 09:49:24.259172] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:53.356 [2024-11-20 09:49:24.261871] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:53.356 [2024-11-20 09:49:24.261906] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:53.356 [2024-11-20 09:49:24.261969] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:53.356 [2024-11-20 09:49:24.261981] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:53.356 [2024-11-20 09:49:24.261985] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:53.356 [2024-11-20 09:49:24.266164] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:53.356 [2024-11-20 09:49:24.266173] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:53.356 [2024-11-20 09:49:24.266178] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:53.356 [2024-11-20 09:49:24.266892] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:53.357 [2024-11-20 09:49:24.266898] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:53.357 [2024-11-20 09:49:24.266907] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:53.357 [2024-11-20 09:49:24.267898] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:53.357 [2024-11-20 09:49:24.267905] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:53.357 [2024-11-20 09:49:24.268907] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
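Between the bar mapping above and the admin-queue setup that follows, the host walks the standard NVMe controller registers over the vfio-user socket. A reader's gloss of the offsets seen in these get_reg/set_reg lines (register names per the NVMe spec register map; the values are quoted from the trace):

  # 0x00 CAP   controller capabilities (read as 0x201e0100ff)
  # 0x08 VS    version; 0x10300 decodes as NVMe 1.3
  # 0x14 CC    controller configuration; written 0x460001 to set CC.EN = 1
  # 0x1c CSTS  controller status; CSTS.RDY polled until it flips 0 -> 1
  # 0x24 AQA   admin queue attributes; 0xff00ff = 256-entry admin SQ and CQ
  # 0x28 ASQ   admin submission queue base address (0x2000003c0000)
  # 0x30 ACQ   admin completion queue base address (0x2000003be000)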
00:15:53.357 [2024-11-20 09:49:24.268912] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:53.357 [2024-11-20 09:49:24.268916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:53.357 [2024-11-20 09:49:24.268920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:53.357 [2024-11-20 09:49:24.269026] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:53.357 [2024-11-20 09:49:24.269029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:53.357 [2024-11-20 09:49:24.269033] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:53.620 [2024-11-20 09:49:24.269913] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:53.620 [2024-11-20 09:49:24.270917] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:53.620 [2024-11-20 09:49:24.271922] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:53.620 [2024-11-20 09:49:24.272915] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:53.620 [2024-11-20 09:49:24.272978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:53.620 [2024-11-20 09:49:24.273931] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:53.620 [2024-11-20 09:49:24.273938] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:53.620 [2024-11-20 09:49:24.273941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:53.620 [2024-11-20 09:49:24.273956] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:53.620 [2024-11-20 09:49:24.273962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:53.620 [2024-11-20 09:49:24.273972] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:53.620 [2024-11-20 09:49:24.273976] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:53.620 [2024-11-20 09:49:24.273978] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:53.620 [2024-11-20 09:49:24.273989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:15:53.620 [2024-11-20 09:49:24.274025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:53.620 [2024-11-20 09:49:24.274035] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:53.620 [2024-11-20 09:49:24.274039] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:53.620 [2024-11-20 09:49:24.274042] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:53.620 [2024-11-20 09:49:24.274045] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:53.620 [2024-11-20 09:49:24.274050] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:53.620 [2024-11-20 09:49:24.274053] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:53.620 [2024-11-20 09:49:24.274057] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:53.620 [2024-11-20 09:49:24.274064] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:53.620 [2024-11-20 09:49:24.274071] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:53.620 [2024-11-20 09:49:24.274082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:53.620 [2024-11-20 09:49:24.274090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.620 [2024-11-20 09:49:24.274096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.620 [2024-11-20 09:49:24.274102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.620 [2024-11-20 09:49:24.274110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.620 [2024-11-20 09:49:24.274113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:53.620 [2024-11-20 09:49:24.274118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:53.621 [2024-11-20 09:49:24.274125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:53.621 [2024-11-20 09:49:24.274133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:53.621 [2024-11-20 09:49:24.274139] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:53.621 
[2024-11-20 09:49:24.274142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:53.621 [2024-11-20 09:49:24.274147] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:53.621 [2024-11-20 09:49:24.274151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:53.621 [2024-11-20 09:49:24.274160] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:53.621 [2024-11-20 09:49:24.274169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:53.621 [2024-11-20 09:49:24.274214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:53.621 [2024-11-20 09:49:24.274220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:53.621 [2024-11-20 09:49:24.274225] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:53.621 [2024-11-20 09:49:24.274229] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:53.621 [2024-11-20 09:49:24.274231] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:53.621 [2024-11-20 09:49:24.274235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:53.621 [2024-11-20 09:49:24.274248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:53.621 [2024-11-20 09:49:24.274254] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:53.621 [2024-11-20 09:49:24.274263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:53.621 [2024-11-20 09:49:24.274269] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:53.621 [2024-11-20 09:49:24.274274] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:53.621 [2024-11-20 09:49:24.274277] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:53.621 [2024-11-20 09:49:24.274279] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:53.621 [2024-11-20 09:49:24.274284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:53.621 [2024-11-20 09:49:24.274301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:53.621 [2024-11-20 09:49:24.274309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:15:53.621 [2024-11-20 09:49:24.274315] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:53.621 [2024-11-20 09:49:24.274320] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:53.621 [2024-11-20 09:49:24.274323] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:53.621 [2024-11-20 09:49:24.274325] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:53.621 [2024-11-20 09:49:24.274329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:53.621 [2024-11-20 09:49:24.274337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:53.621 [2024-11-20 09:49:24.274343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:53.621 [2024-11-20 09:49:24.274347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:53.621 [2024-11-20 09:49:24.274353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:53.621 [2024-11-20 09:49:24.274357] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:53.621 [2024-11-20 09:49:24.274362] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:53.621 [2024-11-20 09:49:24.274365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:53.621 [2024-11-20 09:49:24.274369] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:53.621 [2024-11-20 09:49:24.274372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:53.621 [2024-11-20 09:49:24.274376] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:53.621 [2024-11-20 09:49:24.274390] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:53.621 [2024-11-20 09:49:24.274403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:53.621 [2024-11-20 09:49:24.274411] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:53.621 [2024-11-20 09:49:24.274421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:53.621 [2024-11-20 09:49:24.274429] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:53.621 [2024-11-20 09:49:24.274437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:53.621 [2024-11-20 09:49:24.274445] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:53.621 [2024-11-20 09:49:24.274452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:53.621 [2024-11-20 09:49:24.274461] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:53.621 [2024-11-20 09:49:24.274464] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:53.621 [2024-11-20 09:49:24.274467] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:53.621 [2024-11-20 09:49:24.274469] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:53.621 [2024-11-20 09:49:24.274472] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:53.621 [2024-11-20 09:49:24.274476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:53.621 [2024-11-20 09:49:24.274482] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:53.621 [2024-11-20 09:49:24.274485] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:53.621 [2024-11-20 09:49:24.274487] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:53.621 [2024-11-20 09:49:24.274491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:53.621 [2024-11-20 09:49:24.274496] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:53.621 [2024-11-20 09:49:24.274499] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:53.621 [2024-11-20 09:49:24.274502] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:53.621 [2024-11-20 09:49:24.274506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:53.621 [2024-11-20 09:49:24.274512] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:53.621 [2024-11-20 09:49:24.274516] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:53.621 [2024-11-20 09:49:24.274518] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:53.621 [2024-11-20 09:49:24.274522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:53.621 [2024-11-20 09:49:24.274527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:53.621 [2024-11-20 09:49:24.274535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:15:53.621 [2024-11-20 09:49:24.274542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:53.621 [2024-11-20 09:49:24.274547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:53.621 ===================================================== 00:15:53.621 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:53.621 ===================================================== 00:15:53.621 Controller Capabilities/Features 00:15:53.621 ================================ 00:15:53.621 Vendor ID: 4e58 00:15:53.621 Subsystem Vendor ID: 4e58 00:15:53.621 Serial Number: SPDK1 00:15:53.621 Model Number: SPDK bdev Controller 00:15:53.621 Firmware Version: 25.01 00:15:53.621 Recommended Arb Burst: 6 00:15:53.621 IEEE OUI Identifier: 8d 6b 50 00:15:53.621 Multi-path I/O 00:15:53.621 May have multiple subsystem ports: Yes 00:15:53.621 May have multiple controllers: Yes 00:15:53.621 Associated with SR-IOV VF: No 00:15:53.621 Max Data Transfer Size: 131072 00:15:53.621 Max Number of Namespaces: 32 00:15:53.621 Max Number of I/O Queues: 127 00:15:53.621 NVMe Specification Version (VS): 1.3 00:15:53.621 NVMe Specification Version (Identify): 1.3 00:15:53.621 Maximum Queue Entries: 256 00:15:53.621 Contiguous Queues Required: Yes 00:15:53.621 Arbitration Mechanisms Supported 00:15:53.621 Weighted Round Robin: Not Supported 00:15:53.621 Vendor Specific: Not Supported 00:15:53.621 Reset Timeout: 15000 ms 00:15:53.622 Doorbell Stride: 4 bytes 00:15:53.622 NVM Subsystem Reset: Not Supported 00:15:53.622 Command Sets Supported 00:15:53.622 NVM Command Set: Supported 00:15:53.622 Boot Partition: Not Supported 00:15:53.622 Memory Page Size Minimum: 4096 bytes 00:15:53.622 Memory Page Size Maximum: 4096 bytes 00:15:53.622 Persistent Memory Region: Not Supported 00:15:53.622 Optional Asynchronous Events Supported 00:15:53.622 Namespace Attribute Notices: Supported 00:15:53.622 Firmware Activation Notices: Not Supported 00:15:53.622 ANA Change Notices: Not Supported 00:15:53.622 PLE Aggregate Log Change Notices: Not Supported 00:15:53.622 LBA Status Info Alert Notices: Not Supported 00:15:53.622 EGE Aggregate Log Change Notices: Not Supported 00:15:53.622 Normal NVM Subsystem Shutdown event: Not Supported 00:15:53.622 Zone Descriptor Change Notices: Not Supported 00:15:53.622 Discovery Log Change Notices: Not Supported 00:15:53.622 Controller Attributes 00:15:53.622 128-bit Host Identifier: Supported 00:15:53.622 Non-Operational Permissive Mode: Not Supported 00:15:53.622 NVM Sets: Not Supported 00:15:53.622 Read Recovery Levels: Not Supported 00:15:53.622 Endurance Groups: Not Supported 00:15:53.622 Predictable Latency Mode: Not Supported 00:15:53.622 Traffic Based Keep ALive: Not Supported 00:15:53.622 Namespace Granularity: Not Supported 00:15:53.622 SQ Associations: Not Supported 00:15:53.622 UUID List: Not Supported 00:15:53.622 Multi-Domain Subsystem: Not Supported 00:15:53.622 Fixed Capacity Management: Not Supported 00:15:53.622 Variable Capacity Management: Not Supported 00:15:53.622 Delete Endurance Group: Not Supported 00:15:53.622 Delete NVM Set: Not Supported 00:15:53.622 Extended LBA Formats Supported: Not Supported 00:15:53.622 Flexible Data Placement Supported: Not Supported 00:15:53.622 00:15:53.622 Controller Memory Buffer Support 00:15:53.622 ================================ 00:15:53.622 
Supported: No 00:15:53.622 00:15:53.622 Persistent Memory Region Support 00:15:53.622 ================================ 00:15:53.622 Supported: No 00:15:53.622 00:15:53.622 Admin Command Set Attributes 00:15:53.622 ============================ 00:15:53.622 Security Send/Receive: Not Supported 00:15:53.622 Format NVM: Not Supported 00:15:53.622 Firmware Activate/Download: Not Supported 00:15:53.622 Namespace Management: Not Supported 00:15:53.622 Device Self-Test: Not Supported 00:15:53.622 Directives: Not Supported 00:15:53.622 NVMe-MI: Not Supported 00:15:53.622 Virtualization Management: Not Supported 00:15:53.622 Doorbell Buffer Config: Not Supported 00:15:53.622 Get LBA Status Capability: Not Supported 00:15:53.622 Command & Feature Lockdown Capability: Not Supported 00:15:53.622 Abort Command Limit: 4 00:15:53.622 Async Event Request Limit: 4 00:15:53.622 Number of Firmware Slots: N/A 00:15:53.622 Firmware Slot 1 Read-Only: N/A 00:15:53.622 Firmware Activation Without Reset: N/A 00:15:53.622 Multiple Update Detection Support: N/A 00:15:53.622 Firmware Update Granularity: No Information Provided 00:15:53.622 Per-Namespace SMART Log: No 00:15:53.622 Asymmetric Namespace Access Log Page: Not Supported 00:15:53.622 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:53.622 Command Effects Log Page: Supported 00:15:53.622 Get Log Page Extended Data: Supported 00:15:53.622 Telemetry Log Pages: Not Supported 00:15:53.622 Persistent Event Log Pages: Not Supported 00:15:53.622 Supported Log Pages Log Page: May Support 00:15:53.622 Commands Supported & Effects Log Page: Not Supported 00:15:53.622 Feature Identifiers & Effects Log Page:May Support 00:15:53.622 NVMe-MI Commands & Effects Log Page: May Support 00:15:53.622 Data Area 4 for Telemetry Log: Not Supported 00:15:53.622 Error Log Page Entries Supported: 128 00:15:53.622 Keep Alive: Supported 00:15:53.622 Keep Alive Granularity: 10000 ms 00:15:53.622 00:15:53.622 NVM Command Set Attributes 00:15:53.622 ========================== 00:15:53.622 Submission Queue Entry Size 00:15:53.622 Max: 64 00:15:53.622 Min: 64 00:15:53.622 Completion Queue Entry Size 00:15:53.622 Max: 16 00:15:53.622 Min: 16 00:15:53.622 Number of Namespaces: 32 00:15:53.622 Compare Command: Supported 00:15:53.622 Write Uncorrectable Command: Not Supported 00:15:53.622 Dataset Management Command: Supported 00:15:53.622 Write Zeroes Command: Supported 00:15:53.622 Set Features Save Field: Not Supported 00:15:53.622 Reservations: Not Supported 00:15:53.622 Timestamp: Not Supported 00:15:53.622 Copy: Supported 00:15:53.622 Volatile Write Cache: Present 00:15:53.622 Atomic Write Unit (Normal): 1 00:15:53.622 Atomic Write Unit (PFail): 1 00:15:53.622 Atomic Compare & Write Unit: 1 00:15:53.622 Fused Compare & Write: Supported 00:15:53.622 Scatter-Gather List 00:15:53.622 SGL Command Set: Supported (Dword aligned) 00:15:53.622 SGL Keyed: Not Supported 00:15:53.622 SGL Bit Bucket Descriptor: Not Supported 00:15:53.622 SGL Metadata Pointer: Not Supported 00:15:53.622 Oversized SGL: Not Supported 00:15:53.622 SGL Metadata Address: Not Supported 00:15:53.622 SGL Offset: Not Supported 00:15:53.622 Transport SGL Data Block: Not Supported 00:15:53.622 Replay Protected Memory Block: Not Supported 00:15:53.622 00:15:53.622 Firmware Slot Information 00:15:53.622 ========================= 00:15:53.622 Active slot: 1 00:15:53.622 Slot 1 Firmware Revision: 25.01 00:15:53.622 00:15:53.622 00:15:53.622 Commands Supported and Effects 00:15:53.622 ============================== 00:15:53.622 Admin 
Commands 00:15:53.622 -------------- 00:15:53.622 Get Log Page (02h): Supported 00:15:53.622 Identify (06h): Supported 00:15:53.622 Abort (08h): Supported 00:15:53.622 Set Features (09h): Supported 00:15:53.622 Get Features (0Ah): Supported 00:15:53.622 Asynchronous Event Request (0Ch): Supported 00:15:53.622 Keep Alive (18h): Supported 00:15:53.622 I/O Commands 00:15:53.622 ------------ 00:15:53.622 Flush (00h): Supported LBA-Change 00:15:53.622 Write (01h): Supported LBA-Change 00:15:53.622 Read (02h): Supported 00:15:53.622 Compare (05h): Supported 00:15:53.622 Write Zeroes (08h): Supported LBA-Change 00:15:53.622 Dataset Management (09h): Supported LBA-Change 00:15:53.622 Copy (19h): Supported LBA-Change 00:15:53.622 00:15:53.622 Error Log 00:15:53.622 ========= 00:15:53.622 00:15:53.622 Arbitration 00:15:53.622 =========== 00:15:53.622 Arbitration Burst: 1 00:15:53.622 00:15:53.622 Power Management 00:15:53.622 ================ 00:15:53.622 Number of Power States: 1 00:15:53.622 Current Power State: Power State #0 00:15:53.622 Power State #0: 00:15:53.622 Max Power: 0.00 W 00:15:53.622 Non-Operational State: Operational 00:15:53.622 Entry Latency: Not Reported 00:15:53.622 Exit Latency: Not Reported 00:15:53.622 Relative Read Throughput: 0 00:15:53.622 Relative Read Latency: 0 00:15:53.622 Relative Write Throughput: 0 00:15:53.622 Relative Write Latency: 0 00:15:53.622 Idle Power: Not Reported 00:15:53.622 Active Power: Not Reported 00:15:53.622 Non-Operational Permissive Mode: Not Supported 00:15:53.622 00:15:53.622 Health Information 00:15:53.622 ================== 00:15:53.622 Critical Warnings: 00:15:53.622 Available Spare Space: OK 00:15:53.622 Temperature: OK 00:15:53.622 Device Reliability: OK 00:15:53.622 Read Only: No 00:15:53.622 Volatile Memory Backup: OK 00:15:53.622 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:53.622 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:53.622 Available Spare: 0% 00:15:53.622 Available Spare Threshold: 0% 00:15:53.623 Life Percentage Used: 0% 00:15:53.623 Data Units Read: 0 00:15:53.623 Data Units Written: 0 00:15:53.623 Host Read Commands: 0 00:15:53.623 Host Write Commands: 0 00:15:53.623 Controller Busy Time: 0 minutes 00:15:53.623 Power Cycles: 0 00:15:53.623 Power On Hours: 0 hours 00:15:53.623 Unsafe Shutdowns: 0 00:15:53.623 Unrecoverable Media Errors: 0 00:15:53.623 Lifetime Error Log Entries: 0 00:15:53.623 Warning Temperature Time: 0 minutes 00:15:53.623 Critical Temperature Time: 0 minutes 00:15:53.623 00:15:53.623
Number of Queues 00:15:53.623 ================ 00:15:53.623 Number of I/O Submission Queues: 127 00:15:53.623 Number of I/O Completion Queues: 127 00:15:53.623 00:15:53.623 Active Namespaces 00:15:53.623 ================= 00:15:53.623 Namespace ID:1 00:15:53.623 Error Recovery Timeout: Unlimited 00:15:53.623 Command Set Identifier: NVM (00h) 00:15:53.623 Deallocate: Supported 00:15:53.623 Deallocated/Unwritten Error: Not Supported 00:15:53.623 Deallocated Read Value: Unknown 00:15:53.623 Deallocate in Write Zeroes: Not Supported 00:15:53.623 Deallocated Guard Field: 0xFFFF 00:15:53.623 Flush: Supported 00:15:53.623 Reservation: Supported 00:15:53.623 Namespace Sharing Capabilities: Multiple Controllers 00:15:53.623 Size (in LBAs): 131072 (0GiB) 00:15:53.623 Capacity (in LBAs): 131072 (0GiB) 00:15:53.623 Utilization (in LBAs): 131072 (0GiB) 00:15:53.623 NGUID: A528497890614518ACAEC6DFF0AABBCA 00:15:53.623 UUID: a5284978-9061-4518-acae-c6dff0aabbca 00:15:53.623 Thin Provisioning: Not Supported 00:15:53.623 Per-NS Atomic Units: Yes 00:15:53.623 Atomic Boundary Size (Normal): 0 00:15:53.623 Atomic Boundary Size (PFail): 0 00:15:53.623 Atomic Boundary Offset: 0 00:15:53.623 Maximum Single Source Range Length: 65535 00:15:53.623 Maximum Copy Length: 65535 00:15:53.623 Maximum Source Range Count: 1 00:15:53.623 NGUID/EUI64 Never Reused: No 00:15:53.623 Namespace Write Protected: No 00:15:53.623 Number of LBA Formats: 1 00:15:53.623 Current LBA Format: LBA Format #00 00:15:53.623 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:53.623 00:15:53.623
[2024-11-20 09:49:24.274618] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:53.622 [2024-11-20 09:49:24.274626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:53.622 [2024-11-20 09:49:24.274645] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:53.622 [2024-11-20 09:49:24.274652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.622 [2024-11-20 09:49:24.274657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.622 [2024-11-20 09:49:24.274661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.622 [2024-11-20 09:49:24.274666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.622 [2024-11-20 09:49:24.274937] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:53.622 [2024-11-20 09:49:24.274944] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:53.623 [2024-11-20 09:49:24.275941] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:53.623 [2024-11-20 09:49:24.275982] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:53.623 [2024-11-20 09:49:24.275987] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:53.623 [2024-11-20 09:49:24.276944] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:53.623 [2024-11-20 09:49:24.276952] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:53.623 [2024-11-20 09:49:24.277007] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:53.623 [2024-11-20 09:49:24.277969] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:53.623
09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
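For reference, a gloss of the spdk_nvme_perf flags used here (readings taken from the tool's usage text as best recalled; treat this as orientation, not authoritative documentation):

  # -q 128    queue depth per namespace
  # -o 4096   I/O size in bytes
  # -w read   I/O pattern (the next run uses -w write; the reconnect run uses randrw with -M 50 for a 50/50 mix)
  # -t 5      run time in seconds
  # -c 0x2    core mask: one I/O worker pinned to core 1
  # -s 256    memory size in MB for DPDK (an assumption based on common SPDK app flags)
  # -g        single-file DPDK memory segments (matches --single-file-segments in the EAL parameters logged above)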
00:15:53.623 [2024-11-20 09:49:24.463840] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:58.908 Initializing NVMe Controllers 00:15:58.908 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:58.908 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:58.908 Initialization complete. Launching workers. 00:15:58.908 ======================================================== 00:15:58.908 Latency(us) 00:15:58.908 Device Information : IOPS MiB/s Average min max 00:15:58.908 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39965.40 156.11 3205.43 851.85 9776.30 00:15:58.908 ======================================================== 00:15:58.908 Total : 39965.40 156.11 3205.43 851.85 9776.30 00:15:58.908 00:15:58.908 [2024-11-20 09:49:29.484769] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:58.908 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:58.908 [2024-11-20 09:49:29.672576] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:04.194 Initializing NVMe Controllers 00:16:04.194 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:04.194 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:04.195 Initialization complete. Launching workers. 
00:16:04.195 ======================================================== 00:16:04.195 Latency(us) 00:16:04.195 Device Information : IOPS MiB/s Average min max 00:16:04.195 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7983.67 5972.27 10976.67 00:16:04.195 ======================================================== 00:16:04.195 Total : 16051.20 62.70 7983.67 5972.27 10976.67 00:16:04.195 00:16:04.195 [2024-11-20 09:49:34.708032] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:04.195 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:04.195 [2024-11-20 09:49:34.907861] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:09.581 [2024-11-20 09:49:39.964340] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:09.581 Initializing NVMe Controllers 00:16:09.581 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:09.581 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:09.581 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:09.581 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:09.581 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:09.581 Initialization complete. Launching workers. 00:16:09.581 Starting thread on core 2 00:16:09.581 Starting thread on core 3 00:16:09.581 Starting thread on core 1 00:16:09.581 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:09.581 [2024-11-20 09:49:40.203489] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:12.875 [2024-11-20 09:49:43.271499] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:12.875 Initializing NVMe Controllers 00:16:12.875 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:12.875 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:12.875 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:12.875 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:12.875 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:12.875 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:12.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:12.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:12.875 Initialization complete. Launching workers. 
00:16:12.875 Starting thread on core 1 with urgent priority queue 00:16:12.875 Starting thread on core 2 with urgent priority queue 00:16:12.875 Starting thread on core 3 with urgent priority queue 00:16:12.875 Starting thread on core 0 with urgent priority queue 00:16:12.875 SPDK bdev Controller (SPDK1 ) core 0: 8751.67 IO/s 11.43 secs/100000 ios 00:16:12.875 SPDK bdev Controller (SPDK1 ) core 1: 13707.67 IO/s 7.30 secs/100000 ios 00:16:12.875 SPDK bdev Controller (SPDK1 ) core 2: 9525.33 IO/s 10.50 secs/100000 ios 00:16:12.875 SPDK bdev Controller (SPDK1 ) core 3: 16530.33 IO/s 6.05 secs/100000 ios 00:16:12.875 ======================================================== 00:16:12.875 00:16:12.875 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:12.875 [2024-11-20 09:49:43.509541] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:12.875 Initializing NVMe Controllers 00:16:12.875 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:12.875 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:12.875 Namespace ID: 1 size: 0GB 00:16:12.875 Initialization complete. 00:16:12.875 INFO: using host memory buffer for IO 00:16:12.875 Hello world! 00:16:12.875 [2024-11-20 09:49:43.543766] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:12.875 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:12.875 [2024-11-20 09:49:43.784543] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:14.258 Initializing NVMe Controllers 00:16:14.258 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:14.258 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:14.258 Initialization complete. Launching workers. 
00:16:14.258 submit (in ns) avg, min, max = 6728.7, 2823.3, 3999492.5 00:16:14.258 complete (in ns) avg, min, max = 16759.9, 1636.7, 4993230.8 00:16:14.258 00:16:14.258 Submit histogram 00:16:14.258 ================ 00:16:14.258 Range in us Cumulative Count 00:16:14.258 2.813 - 2.827: 0.0397% ( 8) 00:16:14.258 2.827 - 2.840: 0.3620% ( 65) 00:16:14.258 2.840 - 2.853: 2.4202% ( 415) 00:16:14.258 2.853 - 2.867: 6.4273% ( 808) 00:16:14.258 2.867 - 2.880: 12.3835% ( 1201) 00:16:14.258 2.880 - 2.893: 19.0488% ( 1344) 00:16:14.258 2.893 - 2.907: 24.9058% ( 1181) 00:16:14.258 2.907 - 2.920: 30.7231% ( 1173) 00:16:14.258 2.920 - 2.933: 36.5800% ( 1181) 00:16:14.258 2.933 - 2.947: 41.9312% ( 1079) 00:16:14.258 2.947 - 2.960: 46.8855% ( 999) 00:16:14.258 2.960 - 2.973: 52.8715% ( 1207) 00:16:14.258 2.973 - 2.987: 60.8609% ( 1611) 00:16:14.258 2.987 - 3.000: 68.7562% ( 1592) 00:16:14.258 3.000 - 3.013: 76.9093% ( 1644) 00:16:14.258 3.013 - 3.027: 83.4755% ( 1324) 00:16:14.258 3.027 - 3.040: 89.6796% ( 1251) 00:16:14.258 3.040 - 3.053: 94.3712% ( 946) 00:16:14.258 3.053 - 3.067: 97.1533% ( 561) 00:16:14.258 3.067 - 3.080: 98.4824% ( 268) 00:16:14.258 3.080 - 3.093: 99.1371% ( 132) 00:16:14.258 3.093 - 3.107: 99.4198% ( 57) 00:16:14.258 3.107 - 3.120: 99.5537% ( 27) 00:16:14.258 3.120 - 3.133: 99.6181% ( 13) 00:16:14.258 3.133 - 3.147: 99.6380% ( 4) 00:16:14.258 3.160 - 3.173: 99.6429% ( 1) 00:16:14.258 3.200 - 3.213: 99.6479% ( 1) 00:16:14.258 3.240 - 3.253: 99.6528% ( 1) 00:16:14.258 3.293 - 3.307: 99.6578% ( 1) 00:16:14.258 3.320 - 3.333: 99.6628% ( 1) 00:16:14.258 3.440 - 3.467: 99.6677% ( 1) 00:16:14.258 3.493 - 3.520: 99.6727% ( 1) 00:16:14.258 3.653 - 3.680: 99.6776% ( 1) 00:16:14.258 3.707 - 3.733: 99.6826% ( 1) 00:16:14.258 4.027 - 4.053: 99.6876% ( 1) 00:16:14.258 4.240 - 4.267: 99.6925% ( 1) 00:16:14.258 4.267 - 4.293: 99.6975% ( 1) 00:16:14.258 4.427 - 4.453: 99.7074% ( 2) 00:16:14.258 4.480 - 4.507: 99.7124% ( 1) 00:16:14.258 4.560 - 4.587: 99.7173% ( 1) 00:16:14.258 4.587 - 4.613: 99.7272% ( 2) 00:16:14.258 4.640 - 4.667: 99.7322% ( 1) 00:16:14.258 4.667 - 4.693: 99.7372% ( 1) 00:16:14.258 4.800 - 4.827: 99.7421% ( 1) 00:16:14.258 4.827 - 4.853: 99.7471% ( 1) 00:16:14.258 4.853 - 4.880: 99.7520% ( 1) 00:16:14.258 4.933 - 4.960: 99.7570% ( 1) 00:16:14.258 4.960 - 4.987: 99.7669% ( 2) 00:16:14.258 4.987 - 5.013: 99.7719% ( 1) 00:16:14.258 5.040 - 5.067: 99.7768% ( 1) 00:16:14.258 5.067 - 5.093: 99.7867% ( 2) 00:16:14.258 5.147 - 5.173: 99.7917% ( 1) 00:16:14.258 5.227 - 5.253: 99.7967% ( 1) 00:16:14.258 5.280 - 5.307: 99.8016% ( 1) 00:16:14.258 5.307 - 5.333: 99.8066% ( 1) 00:16:14.258 5.360 - 5.387: 99.8115% ( 1) 00:16:14.258 5.387 - 5.413: 99.8165% ( 1) 00:16:14.258 5.440 - 5.467: 99.8264% ( 2) 00:16:14.258 5.547 - 5.573: 99.8314% ( 1) 00:16:14.258 5.573 - 5.600: 99.8413% ( 2) 00:16:14.258 5.627 - 5.653: 99.8512% ( 2) 00:16:14.258 5.653 - 5.680: 99.8562% ( 1) 00:16:14.258 5.707 - 5.733: 99.8661% ( 2) 00:16:14.258 5.813 - 5.840: 99.8711% ( 1) 00:16:14.258 5.840 - 5.867: 99.8760% ( 1) 00:16:14.258 6.080 - 6.107: 99.8810% ( 1) 00:16:14.258 6.133 - 6.160: 99.8859% ( 1) 00:16:14.258 6.267 - 6.293: 99.8959% ( 2) 00:16:14.258 6.587 - 6.613: 99.9008% ( 1) 00:16:14.258 6.720 - 6.747: 99.9058% ( 1) 00:16:14.258 3986.773 - 4014.080: 100.0000% ( 19) 00:16:14.258 00:16:14.258 Complete histogram 00:16:14.258 ================== 00:16:14.258 Range in us Cumulative Count 00:16:14.258 1.633 - 1.640: 0.1736% ( 35) 00:16:14.258 1.640 - 1.647: 0.6199% ( 90) 00:16:14.258 1.647 - 1.653: 0.6893% ( 14) 
00:16:14.258 1.653 - 1.660: 0.7786% ( 18) 00:16:14.258 1.660 - 1.667: 0.8183% ( 8) 00:16:14.258 1.667 - 1.673: 0.8431% ( 5) 00:16:14.258 [2024-11-20 09:49:44.805193] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:14.258 1.673 - 1.680: 0.8828% ( 8) 00:16:14.258 1.680 - 1.687: 8.0936% ( 1454) 00:16:14.258 1.687 - 1.693: 41.2617% ( 6688) 00:16:14.258 1.693 - 1.700: 50.3620% ( 1835) 00:16:14.258 1.700 - 1.707: 62.8496% ( 2518) 00:16:14.258 1.707 - 1.720: 76.3837% ( 2729) 00:16:14.258 1.720 - 1.733: 82.2307% ( 1179) 00:16:14.258 1.733 - 1.747: 83.7185% ( 300) 00:16:14.258 1.747 - 1.760: 88.7820% ( 1021) 00:16:14.258 1.760 - 1.773: 94.4555% ( 1144) 00:16:14.258 1.773 - 1.787: 97.5501% ( 624) 00:16:14.258 1.787 - 1.800: 98.9734% ( 287) 00:16:14.258 1.800 - 1.813: 99.4198% ( 90) 00:16:14.258 1.813 - 1.827: 99.4644% ( 9) 00:16:14.258 1.827 - 1.840: 99.4694% ( 1) 00:16:14.258 1.853 - 1.867: 99.4743% ( 1) 00:16:14.258 2.160 - 2.173: 99.4793% ( 1) 00:16:14.258 3.373 - 3.387: 99.4842% ( 1) 00:16:14.258 3.400 - 3.413: 99.4892% ( 1) 00:16:14.258 3.440 - 3.467: 99.4991% ( 2) 00:16:14.258 3.760 - 3.787: 99.5041% ( 1) 00:16:14.258 3.867 - 3.893: 99.5090% ( 1) 00:16:14.258 4.000 - 4.027: 99.5140% ( 1) 00:16:14.258 4.080 - 4.107: 99.5189% ( 1) 00:16:14.258 4.133 - 4.160: 99.5289% ( 2) 00:16:14.258 4.160 - 4.187: 99.5338% ( 1) 00:16:14.258 4.187 - 4.213: 99.5388% ( 1) 00:16:14.258 4.213 - 4.240: 99.5437% ( 1) 00:16:14.258 4.293 - 4.320: 99.5487% ( 1) 00:16:14.258 4.320 - 4.347: 99.5537% ( 1) 00:16:14.258 4.347 - 4.373: 99.5586% ( 1) 00:16:14.258 4.400 - 4.427: 99.5636% ( 1) 00:16:14.258 4.507 - 4.533: 99.5685% ( 1) 00:16:14.258 4.560 - 4.587: 99.5735% ( 1) 00:16:14.258 4.747 - 4.773: 99.5785% ( 1) 00:16:14.258 4.987 - 5.013: 99.5834% ( 1) 00:16:14.258 5.120 - 5.147: 99.5884% ( 1) 00:16:14.258 5.413 - 5.440: 99.5933% ( 1) 00:16:14.258 5.867 - 5.893: 99.5983% ( 1) 00:16:14.258 5.947 - 5.973: 99.6033% ( 1) 00:16:14.258 6.320 - 6.347: 99.6082% ( 1) 00:16:14.258 7.733 - 7.787: 99.6132% ( 1) 00:16:14.258 8.640 - 8.693: 99.6181% ( 1) 00:16:14.258 13.173 - 13.227: 99.6231% ( 1) 00:16:14.258 3031.040 - 3044.693: 99.6280% ( 1) 00:16:14.258 3768.320 - 3795.627: 99.6330% ( 1) 00:16:14.258 3986.773 - 4014.080: 99.9901% ( 72) 00:16:14.258 4014.080 - 4041.387: 99.9950% ( 1) 00:16:14.258 4969.813 - 4997.120: 100.0000% ( 1) 00:16:14.258 00:16:14.258 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:14.258 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:14.258 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:14.258 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:14.258 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:14.258 [ 00:16:14.258 { 00:16:14.258 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:14.258 "subtype": "Discovery", 00:16:14.258 "listen_addresses": [], 00:16:14.258 "allow_any_host": true, 00:16:14.258 "hosts": [] 00:16:14.258 }, 00:16:14.258 { 00:16:14.258 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:14.258 "subtype": "NVMe",
"listen_addresses": [ 00:16:14.258 { 00:16:14.258 "trtype": "VFIOUSER", 00:16:14.258 "adrfam": "IPv4", 00:16:14.258 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:14.258 "trsvcid": "0" 00:16:14.258 } 00:16:14.258 ], 00:16:14.258 "allow_any_host": true, 00:16:14.258 "hosts": [], 00:16:14.258 "serial_number": "SPDK1", 00:16:14.258 "model_number": "SPDK bdev Controller", 00:16:14.258 "max_namespaces": 32, 00:16:14.258 "min_cntlid": 1, 00:16:14.258 "max_cntlid": 65519, 00:16:14.258 "namespaces": [ 00:16:14.258 { 00:16:14.258 "nsid": 1, 00:16:14.258 "bdev_name": "Malloc1", 00:16:14.258 "name": "Malloc1", 00:16:14.258 "nguid": "A528497890614518ACAEC6DFF0AABBCA", 00:16:14.258 "uuid": "a5284978-9061-4518-acae-c6dff0aabbca" 00:16:14.259 } 00:16:14.259 ] 00:16:14.259 }, 00:16:14.259 { 00:16:14.259 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:14.259 "subtype": "NVMe", 00:16:14.259 "listen_addresses": [ 00:16:14.259 { 00:16:14.259 "trtype": "VFIOUSER", 00:16:14.259 "adrfam": "IPv4", 00:16:14.259 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:14.259 "trsvcid": "0" 00:16:14.259 } 00:16:14.259 ], 00:16:14.259 "allow_any_host": true, 00:16:14.259 "hosts": [], 00:16:14.259 "serial_number": "SPDK2", 00:16:14.259 "model_number": "SPDK bdev Controller", 00:16:14.259 "max_namespaces": 32, 00:16:14.259 "min_cntlid": 1, 00:16:14.259 "max_cntlid": 65519, 00:16:14.259 "namespaces": [ 00:16:14.259 { 00:16:14.259 "nsid": 1, 00:16:14.259 "bdev_name": "Malloc2", 00:16:14.259 "name": "Malloc2", 00:16:14.259 "nguid": "628EA76AFEDE408A8CA96DB3B2E65F6A", 00:16:14.259 "uuid": "628ea76a-fede-408a-8ca9-6db3b2e65f6a" 00:16:14.259 } 00:16:14.259 ] 00:16:14.259 } 00:16:14.259 ] 00:16:14.259 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:14.259 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1328679 00:16:14.259 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:14.259 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:14.259 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:14.259 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:14.259 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:14.259 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:14.259 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:14.259 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:14.519 [2024-11-20 09:49:45.185940] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:14.519 Malloc3 00:16:14.519 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:14.519 [2024-11-20 09:49:45.375267] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:14.519 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:14.519 Asynchronous Event Request test 00:16:14.519 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:14.519 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:14.519 Registering asynchronous event callbacks... 00:16:14.519 Starting namespace attribute notice tests for all controllers... 00:16:14.519 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:14.519 aer_cb - Changed Namespace 00:16:14.519 Cleaning up... 00:16:14.780 [ 00:16:14.780 { 00:16:14.780 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:14.780 "subtype": "Discovery", 00:16:14.780 "listen_addresses": [], 00:16:14.780 "allow_any_host": true, 00:16:14.780 "hosts": [] 00:16:14.780 }, 00:16:14.780 { 00:16:14.780 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:14.780 "subtype": "NVMe", 00:16:14.780 "listen_addresses": [ 00:16:14.780 { 00:16:14.780 "trtype": "VFIOUSER", 00:16:14.780 "adrfam": "IPv4", 00:16:14.780 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:14.780 "trsvcid": "0" 00:16:14.780 } 00:16:14.780 ], 00:16:14.781 "allow_any_host": true, 00:16:14.781 "hosts": [], 00:16:14.781 "serial_number": "SPDK1", 00:16:14.781 "model_number": "SPDK bdev Controller", 00:16:14.781 "max_namespaces": 32, 00:16:14.781 "min_cntlid": 1, 00:16:14.781 "max_cntlid": 65519, 00:16:14.781 "namespaces": [ 00:16:14.781 { 00:16:14.781 "nsid": 1, 00:16:14.781 "bdev_name": "Malloc1", 00:16:14.781 "name": "Malloc1", 00:16:14.781 "nguid": "A528497890614518ACAEC6DFF0AABBCA", 00:16:14.781 "uuid": "a5284978-9061-4518-acae-c6dff0aabbca" 00:16:14.781 }, 00:16:14.781 { 00:16:14.781 "nsid": 2, 00:16:14.781 "bdev_name": "Malloc3", 00:16:14.781 "name": "Malloc3", 00:16:14.781 "nguid": "892722FE08A44A5D91951B111F3A452F", 00:16:14.781 "uuid": "892722fe-08a4-4a5d-9195-1b111f3a452f" 00:16:14.781 } 00:16:14.781 ] 00:16:14.781 }, 00:16:14.781 { 00:16:14.781 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:14.781 "subtype": "NVMe", 00:16:14.781 "listen_addresses": [ 00:16:14.781 { 00:16:14.781 "trtype": "VFIOUSER", 00:16:14.781 "adrfam": "IPv4", 00:16:14.781 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:14.781 "trsvcid": "0" 00:16:14.781 } 00:16:14.781 ], 00:16:14.781 "allow_any_host": true, 00:16:14.781 "hosts": [], 00:16:14.781 "serial_number": "SPDK2", 00:16:14.781 "model_number": "SPDK bdev 
Controller", 00:16:14.781 "max_namespaces": 32, 00:16:14.781 "min_cntlid": 1, 00:16:14.781 "max_cntlid": 65519, 00:16:14.781 "namespaces": [ 00:16:14.781 { 00:16:14.781 "nsid": 1, 00:16:14.781 "bdev_name": "Malloc2", 00:16:14.781 "name": "Malloc2", 00:16:14.781 "nguid": "628EA76AFEDE408A8CA96DB3B2E65F6A", 00:16:14.781 "uuid": "628ea76a-fede-408a-8ca9-6db3b2e65f6a" 00:16:14.781 } 00:16:14.781 ] 00:16:14.781 } 00:16:14.781 ] 00:16:14.781 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1328679 00:16:14.781 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:14.781 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:14.781 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:14.781 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:14.781 [2024-11-20 09:49:45.613815] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:16:14.781 [2024-11-20 09:49:45.613886] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1328783 ] 00:16:14.781 [2024-11-20 09:49:45.655391] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:14.781 [2024-11-20 09:49:45.657570] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:14.781 [2024-11-20 09:49:45.657589] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1a08877000 00:16:14.781 [2024-11-20 09:49:45.658569] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:14.781 [2024-11-20 09:49:45.659570] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:14.781 [2024-11-20 09:49:45.660573] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:14.781 [2024-11-20 09:49:45.661579] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:14.781 [2024-11-20 09:49:45.662584] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:14.781 [2024-11-20 09:49:45.663595] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:14.781 [2024-11-20 09:49:45.664602] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:14.781 [2024-11-20 09:49:45.665611] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:16:14.781 [2024-11-20 09:49:45.666622] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:14.781 [2024-11-20 09:49:45.666632] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1a0886c000 00:16:14.781 [2024-11-20 09:49:45.667542] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:14.781 [2024-11-20 09:49:45.680431] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:14.781 [2024-11-20 09:49:45.680450] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:14.781 [2024-11-20 09:49:45.685520] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:14.781 [2024-11-20 09:49:45.685555] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:14.781 [2024-11-20 09:49:45.685615] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:14.781 [2024-11-20 09:49:45.685625] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:14.781 [2024-11-20 09:49:45.685629] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:14.781 [2024-11-20 09:49:45.686523] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:14.781 [2024-11-20 09:49:45.686531] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:14.781 [2024-11-20 09:49:45.686536] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:14.781 [2024-11-20 09:49:45.687532] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:14.781 [2024-11-20 09:49:45.687539] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:14.781 [2024-11-20 09:49:45.687544] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:14.781 [2024-11-20 09:49:45.688535] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:14.781 [2024-11-20 09:49:45.688542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:14.781 [2024-11-20 09:49:45.689546] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:14.781 [2024-11-20 09:49:45.689553] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:16:14.781 [2024-11-20 09:49:45.689557] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:14.781 [2024-11-20 09:49:45.689562] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:14.781 [2024-11-20 09:49:45.689668] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:14.781 [2024-11-20 09:49:45.689672] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:14.781 [2024-11-20 09:49:45.689675] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:14.781 [2024-11-20 09:49:45.690551] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:14.781 [2024-11-20 09:49:45.691558] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:14.781 [2024-11-20 09:49:45.692566] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:14.781 [2024-11-20 09:49:45.693570] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:14.781 [2024-11-20 09:49:45.693603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:15.044 [2024-11-20 09:49:45.694579] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:15.044 [2024-11-20 09:49:45.694587] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:15.044 [2024-11-20 09:49:45.694591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:15.045 [2024-11-20 09:49:45.694606] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:15.045 [2024-11-20 09:49:45.694612] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:15.045 [2024-11-20 09:49:45.694621] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:15.045 [2024-11-20 09:49:45.694625] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:15.045 [2024-11-20 09:49:45.694627] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:15.045 [2024-11-20 09:49:45.694636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:15.045 [2024-11-20 09:49:45.701164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:15.045 
[2024-11-20 09:49:45.701173] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:15.045 [2024-11-20 09:49:45.701176] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:15.045 [2024-11-20 09:49:45.701179] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:15.045 [2024-11-20 09:49:45.701183] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:15.045 [2024-11-20 09:49:45.701188] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:15.045 [2024-11-20 09:49:45.701191] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:15.045 [2024-11-20 09:49:45.701195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:15.045 [2024-11-20 09:49:45.701202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:15.045 [2024-11-20 09:49:45.701209] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:15.045 [2024-11-20 09:49:45.709162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:15.045 [2024-11-20 09:49:45.709171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.045 [2024-11-20 09:49:45.709180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.045 [2024-11-20 09:49:45.709186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.045 [2024-11-20 09:49:45.709192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.045 [2024-11-20 09:49:45.709195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:15.045 [2024-11-20 09:49:45.709200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:15.045 [2024-11-20 09:49:45.709207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:15.045 [2024-11-20 09:49:45.717163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:15.045 [2024-11-20 09:49:45.717170] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:15.045 [2024-11-20 09:49:45.717174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:16:15.045 [2024-11-20 09:49:45.717179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:15.109 [2024-11-20 09:49:45.717183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:15.109 [2024-11-20 09:49:45.717189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:15.109 [2024-11-20 09:49:45.725162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:15.109 [2024-11-20 09:49:45.725207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:15.109 [2024-11-20 09:49:45.725213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:15.109 [2024-11-20 09:49:45.725219] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:15.109 [2024-11-20 09:49:45.725222] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:15.109 [2024-11-20 09:49:45.725225] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:15.109 [2024-11-20 09:49:45.725230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:15.109 [2024-11-20 09:49:45.733163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:15.109 [2024-11-20 09:49:45.733170] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:15.109 [2024-11-20 09:49:45.733182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:15.109 [2024-11-20 09:49:45.733188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:15.109 [2024-11-20 09:49:45.733193] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:15.109 [2024-11-20 09:49:45.733196] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:15.109 [2024-11-20 09:49:45.733200] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:15.109 [2024-11-20 09:49:45.733204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:15.109 [2024-11-20 09:49:45.741163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:15.109 [2024-11-20 09:49:45.741173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:15.109 [2024-11-20 09:49:45.741179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:16:15.109 [2024-11-20 09:49:45.741184] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:15.109 [2024-11-20 09:49:45.741187] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:15.109 [2024-11-20 09:49:45.741190] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:15.109 [2024-11-20 09:49:45.741194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:15.109 [2024-11-20 09:49:45.749163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:15.109 [2024-11-20 09:49:45.749170] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:15.109 [2024-11-20 09:49:45.749174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:15.109 [2024-11-20 09:49:45.749180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:16:15.109 [2024-11-20 09:49:45.749184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:15.109 [2024-11-20 09:49:45.749188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:15.109 [2024-11-20 09:49:45.749191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:15.109 [2024-11-20 09:49:45.749195] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:15.109 [2024-11-20 09:49:45.749198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:15.110 [2024-11-20 09:49:45.749202] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:15.110 [2024-11-20 09:49:45.749214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:15.110 [2024-11-20 09:49:45.757163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:15.110 [2024-11-20 09:49:45.757173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:15.110 [2024-11-20 09:49:45.765161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:15.110 [2024-11-20 09:49:45.765171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:15.110 [2024-11-20 09:49:45.773162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:16:15.110 [2024-11-20 09:49:45.773174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:15.110 [2024-11-20 09:49:45.781161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:15.110 [2024-11-20 09:49:45.781173] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:15.110 [2024-11-20 09:49:45.781176] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:15.110 [2024-11-20 09:49:45.781179] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:15.110 [2024-11-20 09:49:45.781181] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:15.110 [2024-11-20 09:49:45.781184] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:15.110 [2024-11-20 09:49:45.781189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:15.110 [2024-11-20 09:49:45.781194] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:15.110 [2024-11-20 09:49:45.781197] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:15.110 [2024-11-20 09:49:45.781199] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:15.110 [2024-11-20 09:49:45.781204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:15.110 [2024-11-20 09:49:45.781209] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:15.110 [2024-11-20 09:49:45.781212] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:15.110 [2024-11-20 09:49:45.781214] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:15.110 [2024-11-20 09:49:45.781219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:15.110 [2024-11-20 09:49:45.781224] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:15.110 [2024-11-20 09:49:45.781227] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:15.110 [2024-11-20 09:49:45.781230] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:15.110 [2024-11-20 09:49:45.781234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:15.110 [2024-11-20 09:49:45.789163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:15.110 [2024-11-20 09:49:45.789173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:15.110 [2024-11-20 09:49:45.789181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:15.110 
[2024-11-20 09:49:45.789186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:15.110 ===================================================== 00:16:15.110 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:15.110 ===================================================== 00:16:15.110 Controller Capabilities/Features 00:16:15.110 ================================ 00:16:15.110 Vendor ID: 4e58 00:16:15.110 Subsystem Vendor ID: 4e58 00:16:15.110 Serial Number: SPDK2 00:16:15.110 Model Number: SPDK bdev Controller 00:16:15.110 Firmware Version: 25.01 00:16:15.110 Recommended Arb Burst: 6 00:16:15.110 IEEE OUI Identifier: 8d 6b 50 00:16:15.110 Multi-path I/O 00:16:15.110 May have multiple subsystem ports: Yes 00:16:15.110 May have multiple controllers: Yes 00:16:15.110 Associated with SR-IOV VF: No 00:16:15.110 Max Data Transfer Size: 131072 00:16:15.110 Max Number of Namespaces: 32 00:16:15.110 Max Number of I/O Queues: 127 00:16:15.110 NVMe Specification Version (VS): 1.3 00:16:15.110 NVMe Specification Version (Identify): 1.3 00:16:15.110 Maximum Queue Entries: 256 00:16:15.110 Contiguous Queues Required: Yes 00:16:15.110 Arbitration Mechanisms Supported 00:16:15.110 Weighted Round Robin: Not Supported 00:16:15.110 Vendor Specific: Not Supported 00:16:15.110 Reset Timeout: 15000 ms 00:16:15.110 Doorbell Stride: 4 bytes 00:16:15.110 NVM Subsystem Reset: Not Supported 00:16:15.110 Command Sets Supported 00:16:15.110 NVM Command Set: Supported 00:16:15.110 Boot Partition: Not Supported 00:16:15.110 Memory Page Size Minimum: 4096 bytes 00:16:15.110 Memory Page Size Maximum: 4096 bytes 00:16:15.110 Persistent Memory Region: Not Supported 00:16:15.110 Optional Asynchronous Events Supported 00:16:15.110 Namespace Attribute Notices: Supported 00:16:15.110 Firmware Activation Notices: Not Supported 00:16:15.110 ANA Change Notices: Not Supported 00:16:15.110 PLE Aggregate Log Change Notices: Not Supported 00:16:15.110 LBA Status Info Alert Notices: Not Supported 00:16:15.110 EGE Aggregate Log Change Notices: Not Supported 00:16:15.110 Normal NVM Subsystem Shutdown event: Not Supported 00:16:15.110 Zone Descriptor Change Notices: Not Supported 00:16:15.110 Discovery Log Change Notices: Not Supported 00:16:15.110 Controller Attributes 00:16:15.110 128-bit Host Identifier: Supported 00:16:15.110 Non-Operational Permissive Mode: Not Supported 00:16:15.110 NVM Sets: Not Supported 00:16:15.110 Read Recovery Levels: Not Supported 00:16:15.110 Endurance Groups: Not Supported 00:16:15.110 Predictable Latency Mode: Not Supported 00:16:15.110 Traffic Based Keep ALive: Not Supported 00:16:15.110 Namespace Granularity: Not Supported 00:16:15.110 SQ Associations: Not Supported 00:16:15.110 UUID List: Not Supported 00:16:15.110 Multi-Domain Subsystem: Not Supported 00:16:15.110 Fixed Capacity Management: Not Supported 00:16:15.110 Variable Capacity Management: Not Supported 00:16:15.110 Delete Endurance Group: Not Supported 00:16:15.110 Delete NVM Set: Not Supported 00:16:15.110 Extended LBA Formats Supported: Not Supported 00:16:15.110 Flexible Data Placement Supported: Not Supported 00:16:15.110 00:16:15.110 Controller Memory Buffer Support 00:16:15.110 ================================ 00:16:15.110 Supported: No 00:16:15.110 00:16:15.110 Persistent Memory Region Support 00:16:15.110 ================================ 00:16:15.110 Supported: No 00:16:15.110 00:16:15.110 Admin Command Set Attributes 
00:16:15.110 ============================ 00:16:15.110 Security Send/Receive: Not Supported 00:16:15.110 Format NVM: Not Supported 00:16:15.110 Firmware Activate/Download: Not Supported 00:16:15.110 Namespace Management: Not Supported 00:16:15.110 Device Self-Test: Not Supported 00:16:15.110 Directives: Not Supported 00:16:15.110 NVMe-MI: Not Supported 00:16:15.110 Virtualization Management: Not Supported 00:16:15.110 Doorbell Buffer Config: Not Supported 00:16:15.110 Get LBA Status Capability: Not Supported 00:16:15.110 Command & Feature Lockdown Capability: Not Supported 00:16:15.110 Abort Command Limit: 4 00:16:15.110 Async Event Request Limit: 4 00:16:15.110 Number of Firmware Slots: N/A 00:16:15.110 Firmware Slot 1 Read-Only: N/A 00:16:15.110 Firmware Activation Without Reset: N/A 00:16:15.111 Multiple Update Detection Support: N/A 00:16:15.111 Firmware Update Granularity: No Information Provided 00:16:15.111 Per-Namespace SMART Log: No 00:16:15.111 Asymmetric Namespace Access Log Page: Not Supported 00:16:15.111 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:15.111 Command Effects Log Page: Supported 00:16:15.111 Get Log Page Extended Data: Supported 00:16:15.111 Telemetry Log Pages: Not Supported 00:16:15.111 Persistent Event Log Pages: Not Supported 00:16:15.111 Supported Log Pages Log Page: May Support 00:16:15.111 Commands Supported & Effects Log Page: Not Supported 00:16:15.111 Feature Identifiers & Effects Log Page:May Support 00:16:15.111 NVMe-MI Commands & Effects Log Page: May Support 00:16:15.111 Data Area 4 for Telemetry Log: Not Supported 00:16:15.111 Error Log Page Entries Supported: 128 00:16:15.111 Keep Alive: Supported 00:16:15.111 Keep Alive Granularity: 10000 ms 00:16:15.111 00:16:15.111 NVM Command Set Attributes 00:16:15.111 ========================== 00:16:15.111 Submission Queue Entry Size 00:16:15.111 Max: 64 00:16:15.111 Min: 64 00:16:15.111 Completion Queue Entry Size 00:16:15.111 Max: 16 00:16:15.111 Min: 16 00:16:15.111 Number of Namespaces: 32 00:16:15.111 Compare Command: Supported 00:16:15.111 Write Uncorrectable Command: Not Supported 00:16:15.111 Dataset Management Command: Supported 00:16:15.111 Write Zeroes Command: Supported 00:16:15.111 Set Features Save Field: Not Supported 00:16:15.111 Reservations: Not Supported 00:16:15.111 Timestamp: Not Supported 00:16:15.111 Copy: Supported 00:16:15.111 Volatile Write Cache: Present 00:16:15.111 Atomic Write Unit (Normal): 1 00:16:15.111 Atomic Write Unit (PFail): 1 00:16:15.111 Atomic Compare & Write Unit: 1 00:16:15.111 Fused Compare & Write: Supported 00:16:15.111 Scatter-Gather List 00:16:15.111 SGL Command Set: Supported (Dword aligned) 00:16:15.111 SGL Keyed: Not Supported 00:16:15.111 SGL Bit Bucket Descriptor: Not Supported 00:16:15.111 SGL Metadata Pointer: Not Supported 00:16:15.111 Oversized SGL: Not Supported 00:16:15.111 SGL Metadata Address: Not Supported 00:16:15.111 SGL Offset: Not Supported 00:16:15.111 Transport SGL Data Block: Not Supported 00:16:15.111 Replay Protected Memory Block: Not Supported 00:16:15.111 00:16:15.111 Firmware Slot Information 00:16:15.111 ========================= 00:16:15.111 Active slot: 1 00:16:15.111 Slot 1 Firmware Revision: 25.01 00:16:15.111 00:16:15.111 00:16:15.111 Commands Supported and Effects 00:16:15.111 ============================== 00:16:15.111 Admin Commands 00:16:15.111 -------------- 00:16:15.111 Get Log Page (02h): Supported 00:16:15.111 Identify (06h): Supported 00:16:15.111 Abort (08h): Supported 00:16:15.111 Set Features (09h): Supported 
00:16:15.111 Get Features (0Ah): Supported 00:16:15.111 Asynchronous Event Request (0Ch): Supported 00:16:15.111 Keep Alive (18h): Supported 00:16:15.111 I/O Commands 00:16:15.111 ------------ 00:16:15.111 Flush (00h): Supported LBA-Change 00:16:15.111 Write (01h): Supported LBA-Change 00:16:15.111 Read (02h): Supported 00:16:15.111 Compare (05h): Supported 00:16:15.111 Write Zeroes (08h): Supported LBA-Change 00:16:15.111 Dataset Management (09h): Supported LBA-Change 00:16:15.111 Copy (19h): Supported LBA-Change 00:16:15.111 00:16:15.111 Error Log 00:16:15.111 ========= 00:16:15.111 00:16:15.111 Arbitration 00:16:15.111 =========== 00:16:15.111 Arbitration Burst: 1 00:16:15.111 00:16:15.111 Power Management 00:16:15.111 ================ 00:16:15.111 Number of Power States: 1 00:16:15.111 Current Power State: Power State #0 00:16:15.111 Power State #0: 00:16:15.111 Max Power: 0.00 W 00:16:15.111 Non-Operational State: Operational 00:16:15.111 Entry Latency: Not Reported 00:16:15.111 Exit Latency: Not Reported 00:16:15.111 Relative Read Throughput: 0 00:16:15.111 Relative Read Latency: 0 00:16:15.111 Relative Write Throughput: 0 00:16:15.111 Relative Write Latency: 0 00:16:15.111 Idle Power: Not Reported 00:16:15.111 Active Power: Not Reported 00:16:15.111 Non-Operational Permissive Mode: Not Supported 00:16:15.111 00:16:15.111 Health Information 00:16:15.111 ================== 00:16:15.111 Critical Warnings: 00:16:15.111 Available Spare Space: OK 00:16:15.111 Temperature: OK 00:16:15.111 Device Reliability: OK 00:16:15.111 Read Only: No 00:16:15.111 Volatile Memory Backup: OK 00:16:15.111 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:15.111 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:15.111 Available Spare: 0% 00:16:15.111 [2024-11-20 09:49:45.789259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:15.111 [2024-11-20 09:49:45.797166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:15.111 [2024-11-20 09:49:45.797187] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:15.111 [2024-11-20 09:49:45.797194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.111 [2024-11-20 09:49:45.797199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.111 [2024-11-20 09:49:45.797205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.111 [2024-11-20 09:49:45.797209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.111 [2024-11-20 09:49:45.797248] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:15.111 [2024-11-20 09:49:45.797256] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:15.111 [2024-11-20 09:49:45.798254] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:15.111 [2024-11-20 09:49:45.798290] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*:
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:15.111 [2024-11-20 09:49:45.798295] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:15.111 [2024-11-20 09:49:45.799258] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:15.111 [2024-11-20 09:49:45.799267] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:15.111 [2024-11-20 09:49:45.799306] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:15.111 [2024-11-20 09:49:45.800278] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:15.111 Available Spare Threshold: 0% 00:16:15.111 Life Percentage Used: 0% 00:16:15.111 Data Units Read: 0 00:16:15.111 Data Units Written: 0 00:16:15.111 Host Read Commands: 0 00:16:15.111 Host Write Commands: 0 00:16:15.111 Controller Busy Time: 0 minutes 00:16:15.111 Power Cycles: 0 00:16:15.111 Power On Hours: 0 hours 00:16:15.111 Unsafe Shutdowns: 0 00:16:15.111 Unrecoverable Media Errors: 0 00:16:15.111 Lifetime Error Log Entries: 0 00:16:15.111 Warning Temperature Time: 0 minutes 00:16:15.111 Critical Temperature Time: 0 minutes 00:16:15.111 00:16:15.111 Number of Queues 00:16:15.112 ================ 00:16:15.112 Number of I/O Submission Queues: 127 00:16:15.112 Number of I/O Completion Queues: 127 00:16:15.112 00:16:15.112 Active Namespaces 00:16:15.112 ================= 00:16:15.112 Namespace ID:1 00:16:15.112 Error Recovery Timeout: Unlimited 00:16:15.112 Command Set Identifier: NVM (00h) 00:16:15.112 Deallocate: Supported 00:16:15.112 Deallocated/Unwritten Error: Not Supported 00:16:15.112 Deallocated Read Value: Unknown 00:16:15.112 Deallocate in Write Zeroes: Not Supported 00:16:15.112 Deallocated Guard Field: 0xFFFF 00:16:15.112 Flush: Supported 00:16:15.112 Reservation: Supported 00:16:15.112 Namespace Sharing Capabilities: Multiple Controllers 00:16:15.112 Size (in LBAs): 131072 (0GiB) 00:16:15.112 Capacity (in LBAs): 131072 (0GiB) 00:16:15.112 Utilization (in LBAs): 131072 (0GiB) 00:16:15.112 NGUID: 628EA76AFEDE408A8CA96DB3B2E65F6A 00:16:15.112 UUID: 628ea76a-fede-408a-8ca9-6db3b2e65f6a 00:16:15.112 Thin Provisioning: Not Supported 00:16:15.112 Per-NS Atomic Units: Yes 00:16:15.112 Atomic Boundary Size (Normal): 0 00:16:15.112 Atomic Boundary Size (PFail): 0 00:16:15.112 Atomic Boundary Offset: 0 00:16:15.112 Maximum Single Source Range Length: 65535 00:16:15.112 Maximum Copy Length: 65535 00:16:15.112 Maximum Source Range Count: 1 00:16:15.112 NGUID/EUI64 Never Reused: No 00:16:15.112 Namespace Write Protected: No 00:16:15.112 Number of LBA Formats: 1 00:16:15.112 Current LBA Format: LBA Format #00 00:16:15.112 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:15.112 00:16:15.112 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:15.372 [2024-11-20 09:49:45.990529] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:20.657 Initializing NVMe Controllers 00:16:20.657
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:20.657 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:20.657 Initialization complete. Launching workers. 00:16:20.657 ======================================================== 00:16:20.657 Latency(us) 00:16:20.657 Device Information : IOPS MiB/s Average min max 00:16:20.657 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39975.13 156.15 3201.66 843.39 7445.25 00:16:20.657 ======================================================== 00:16:20.657 Total : 39975.13 156.15 3201.66 843.39 7445.25 00:16:20.657 00:16:20.657 [2024-11-20 09:49:51.096361] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:20.657 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:20.657 [2024-11-20 09:49:51.286962] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:25.938 Initializing NVMe Controllers 00:16:25.938 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:25.938 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:25.938 Initialization complete. Launching workers. 00:16:25.938 ======================================================== 00:16:25.938 Latency(us) 00:16:25.938 Device Information : IOPS MiB/s Average min max 00:16:25.938 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40029.18 156.36 3198.16 846.35 9734.01 00:16:25.938 ======================================================== 00:16:25.938 Total : 40029.18 156.36 3198.16 846.35 9734.01 00:16:25.938 00:16:25.938 [2024-11-20 09:49:56.303667] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:25.938 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:25.938 [2024-11-20 09:49:56.508863] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:31.227 [2024-11-20 09:50:01.657242] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:31.227 Initializing NVMe Controllers 00:16:31.227 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:31.227 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:31.227 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:31.227 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:31.227 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:31.227 Initialization complete. Launching workers. 
00:16:31.227 Starting thread on core 2 00:16:31.227 Starting thread on core 3 00:16:31.227 Starting thread on core 1 00:16:31.227 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:31.227 [2024-11-20 09:50:01.903599] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:34.522 [2024-11-20 09:50:04.963743] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:34.522 Initializing NVMe Controllers 00:16:34.522 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:34.522 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:34.522 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:34.522 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:34.522 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:34.522 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:34.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:34.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:34.522 Initialization complete. Launching workers. 00:16:34.522 Starting thread on core 1 with urgent priority queue 00:16:34.522 Starting thread on core 2 with urgent priority queue 00:16:34.522 Starting thread on core 3 with urgent priority queue 00:16:34.522 Starting thread on core 0 with urgent priority queue 00:16:34.522 SPDK bdev Controller (SPDK2 ) core 0: 7752.33 IO/s 12.90 secs/100000 ios 00:16:34.522 SPDK bdev Controller (SPDK2 ) core 1: 10492.00 IO/s 9.53 secs/100000 ios 00:16:34.522 SPDK bdev Controller (SPDK2 ) core 2: 6880.67 IO/s 14.53 secs/100000 ios 00:16:34.522 SPDK bdev Controller (SPDK2 ) core 3: 7548.33 IO/s 13.25 secs/100000 ios 00:16:34.522 ======================================================== 00:16:34.522 00:16:34.522 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:34.522 [2024-11-20 09:50:05.203585] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:34.522 Initializing NVMe Controllers 00:16:34.522 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:34.522 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:34.522 Namespace ID: 1 size: 0GB 00:16:34.522 Initialization complete. 00:16:34.522 INFO: using host memory buffer for IO 00:16:34.522 Hello world! 
00:16:34.522 [2024-11-20 09:50:05.213656] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:34.522 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:34.783 [2024-11-20 09:50:05.450814] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:35.725 Initializing NVMe Controllers 00:16:35.725 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:35.725 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:35.725 Initialization complete. Launching workers. 00:16:35.725 submit (in ns) avg, min, max = 5722.2, 2846.7, 3998369.2 00:16:35.725 complete (in ns) avg, min, max = 17495.9, 1645.8, 3999847.5 00:16:35.725 00:16:35.725 Submit histogram 00:16:35.725 ================ 00:16:35.725 Range in us Cumulative Count 00:16:35.725 2.840 - 2.853: 0.2438% ( 50) 00:16:35.725 2.853 - 2.867: 1.3264% ( 222) 00:16:35.725 2.867 - 2.880: 3.9011% ( 528) 00:16:35.725 2.880 - 2.893: 8.3191% ( 906) 00:16:35.725 2.893 - 2.907: 13.5173% ( 1066) 00:16:35.725 2.907 - 2.920: 18.5303% ( 1028) 00:16:35.725 2.920 - 2.933: 23.9967% ( 1121) 00:16:35.725 2.933 - 2.947: 29.0584% ( 1038) 00:16:35.725 2.947 - 2.960: 34.3980% ( 1095) 00:16:35.725 2.960 - 2.973: 39.6938% ( 1086) 00:16:35.725 2.973 - 2.987: 45.0480% ( 1098) 00:16:35.725 2.987 - 3.000: 50.7534% ( 1170) 00:16:35.725 3.000 - 3.013: 57.5267% ( 1389) 00:16:35.725 3.013 - 3.027: 66.6114% ( 1863) 00:16:35.725 3.027 - 3.040: 76.1057% ( 1947) 00:16:35.725 3.040 - 3.053: 83.9274% ( 1604) 00:16:35.725 3.053 - 3.067: 89.8279% ( 1210) 00:16:35.725 3.067 - 3.080: 94.2459% ( 906) 00:16:35.725 3.080 - 3.093: 97.3131% ( 629) 00:16:35.725 3.093 - 3.107: 98.6785% ( 280) 00:16:35.725 3.107 - 3.120: 99.2685% ( 121) 00:16:35.725 3.120 - 3.133: 99.4880% ( 45) 00:16:35.725 3.133 - 3.147: 99.5562% ( 14) 00:16:35.725 3.147 - 3.160: 99.5709% ( 3) 00:16:35.725 3.160 - 3.173: 99.5855% ( 3) 00:16:35.725 3.173 - 3.187: 99.5904% ( 1) 00:16:35.725 3.187 - 3.200: 99.5953% ( 1) 00:16:35.725 3.200 - 3.213: 99.6001% ( 1) 00:16:35.725 3.373 - 3.387: 99.6050% ( 1) 00:16:35.725 3.493 - 3.520: 99.6099% ( 1) 00:16:35.725 3.520 - 3.547: 99.6196% ( 2) 00:16:35.725 3.707 - 3.733: 99.6245% ( 1) 00:16:35.725 4.133 - 4.160: 99.6294% ( 1) 00:16:35.725 4.187 - 4.213: 99.6343% ( 1) 00:16:35.725 4.347 - 4.373: 99.6391% ( 1) 00:16:35.725 4.373 - 4.400: 99.6440% ( 1) 00:16:35.725 4.453 - 4.480: 99.6538% ( 2) 00:16:35.725 4.480 - 4.507: 99.6587% ( 1) 00:16:35.725 4.533 - 4.560: 99.6684% ( 2) 00:16:35.725 4.640 - 4.667: 99.6782% ( 2) 00:16:35.725 4.693 - 4.720: 99.6830% ( 1) 00:16:35.725 4.747 - 4.773: 99.7074% ( 5) 00:16:35.725 4.853 - 4.880: 99.7123% ( 1) 00:16:35.725 4.933 - 4.960: 99.7172% ( 1) 00:16:35.725 4.960 - 4.987: 99.7220% ( 1) 00:16:35.725 5.040 - 5.067: 99.7269% ( 1) 00:16:35.725 5.067 - 5.093: 99.7318% ( 1) 00:16:35.725 5.093 - 5.120: 99.7416% ( 2) 00:16:35.725 5.120 - 5.147: 99.7562% ( 3) 00:16:35.725 5.147 - 5.173: 99.7708% ( 3) 00:16:35.725 5.173 - 5.200: 99.7757% ( 1) 00:16:35.725 5.227 - 5.253: 99.7806% ( 1) 00:16:35.725 5.253 - 5.280: 99.7903% ( 2) 00:16:35.725 5.280 - 5.307: 99.7952% ( 1) 00:16:35.725 5.387 - 5.413: 99.8001% ( 1) 00:16:35.725 5.413 - 5.440: 99.8098% ( 2) 00:16:35.725 5.440 - 5.467: 99.8196% ( 2) 00:16:35.725 5.467 - 5.493: 
99.8245% ( 1) 00:16:35.725 5.493 - 5.520: 99.8293% ( 1) 00:16:35.725 5.520 - 5.547: 99.8342% ( 1) 00:16:35.725 5.573 - 5.600: 99.8391% ( 1) 00:16:35.725 5.600 - 5.627: 99.8440% ( 1) 00:16:35.725 5.653 - 5.680: 99.8488% ( 1) 00:16:35.725 5.707 - 5.733: 99.8537% ( 1) 00:16:35.725 5.787 - 5.813: 99.8586% ( 1) 00:16:35.725 5.813 - 5.840: 99.8635% ( 1) 00:16:35.725 5.867 - 5.893: 99.8683% ( 1) 00:16:35.725 6.000 - 6.027: 99.8732% ( 1) 00:16:35.725 6.080 - 6.107: 99.8781% ( 1) 00:16:35.725 6.107 - 6.133: 99.8830% ( 1) 00:16:35.725 6.160 - 6.187: 99.8878% ( 1) 00:16:35.725 6.187 - 6.213: 99.8927% ( 1) 00:16:35.725 6.453 - 6.480: 99.8976% ( 1) 00:16:35.725 6.533 - 6.560: 99.9073% ( 2) 00:16:35.725 6.587 - 6.613: 99.9122% ( 1) 00:16:35.725 6.747 - 6.773: 99.9171% ( 1) 00:16:35.725 7.253 - 7.307: 99.9220% ( 1) 00:16:35.725 7.360 - 7.413: 99.9269% ( 1) 00:16:35.725 9.067 - 9.120: 99.9317% ( 1) 00:16:35.725 [2024-11-20 09:50:06.544675] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:35.725 3986.773 - 4014.080: 100.0000% ( 14) 00:16:35.725 00:16:35.725 Complete histogram 00:16:35.725 ================== 00:16:35.725 Range in us Cumulative Count 00:16:35.725 1.640 - 1.647: 0.0049% ( 1) 00:16:35.725 1.647 - 1.653: 0.3950% ( 80) 00:16:35.725 1.653 - 1.660: 0.6827% ( 59) 00:16:35.725 1.660 - 1.667: 0.7120% ( 6) 00:16:35.725 1.667 - 1.673: 0.8144% ( 21) 00:16:35.725 1.673 - 1.680: 0.8387% ( 5) 00:16:35.725 1.680 - 1.687: 0.8534% ( 3) 00:16:35.725 1.687 - 1.693: 0.9460% ( 19) 00:16:35.725 1.693 - 1.700: 45.7405% ( 9186) 00:16:35.725 1.700 - 1.707: 56.4490% ( 2196) 00:16:35.726 1.707 - 1.720: 73.5895% ( 3515) 00:16:35.726 1.720 - 1.733: 81.7038% ( 1664) 00:16:35.726 1.733 - 1.747: 83.3813% ( 344) 00:16:35.726 1.747 - 1.760: 86.2827% ( 595) 00:16:35.726 1.760 - 1.773: 91.4761% ( 1065) 00:16:35.726 1.773 - 1.787: 96.0696% ( 942) 00:16:35.726 1.787 - 1.800: 98.2543% ( 448) 00:16:35.726 1.800 - 1.813: 99.2490% ( 204) 00:16:35.726 1.813 - 1.827: 99.4246% ( 36) 00:16:35.726 1.827 - 1.840: 99.4441% ( 4) 00:16:35.726 1.840 - 1.853: 99.4490% ( 1) 00:16:35.726 3.240 - 3.253: 99.4538% ( 1) 00:16:35.726 3.253 - 3.267: 99.4587% ( 1) 00:16:35.726 3.320 - 3.333: 99.4636% ( 1) 00:16:35.726 3.333 - 3.347: 99.4734% ( 2) 00:16:35.726 3.347 - 3.360: 99.4782% ( 1) 00:16:35.726 3.440 - 3.467: 99.4831% ( 1) 00:16:35.726 3.467 - 3.493: 99.4880% ( 1) 00:16:35.726 3.547 - 3.573: 99.4929% ( 1) 00:16:35.726 3.600 - 3.627: 99.4977% ( 1) 00:16:35.726 3.627 - 3.653: 99.5026% ( 1) 00:16:35.726 3.840 - 3.867: 99.5075% ( 1) 00:16:35.726 3.867 - 3.893: 99.5124% ( 1) 00:16:35.726 3.947 - 3.973: 99.5221% ( 2) 00:16:35.726 4.000 - 4.027: 99.5319% ( 2) 00:16:35.726 4.107 - 4.133: 99.5416% ( 2) 00:16:35.726 4.133 - 4.160: 99.5465% ( 1) 00:16:35.726 4.213 - 4.240: 99.5514% ( 1) 00:16:35.726 4.400 - 4.427: 99.5611% ( 2) 00:16:35.726 4.507 - 4.533: 99.5660% ( 1) 00:16:35.726 4.533 - 4.560: 99.5709% ( 1) 00:16:35.726 4.560 - 4.587: 99.5758% ( 1) 00:16:35.726 4.613 - 4.640: 99.5806% ( 1) 00:16:35.726 4.720 - 4.747: 99.5855% ( 1) 00:16:35.726 4.747 - 4.773: 99.5904% ( 1) 00:16:35.726 4.773 - 4.800: 99.5953% ( 1) 00:16:35.726 11.893 - 11.947: 99.6001% ( 1) 00:16:35.726 12.693 - 12.747: 99.6050% ( 1) 00:16:35.726 3986.773 - 4014.080: 100.0000% ( 81) 00:16:35.726 00:16:35.726 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:35.726 09:50:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:35.726 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:35.726 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:35.726 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:35.987 [ 00:16:35.987 { 00:16:35.987 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:35.987 "subtype": "Discovery", 00:16:35.987 "listen_addresses": [], 00:16:35.987 "allow_any_host": true, 00:16:35.987 "hosts": [] 00:16:35.987 }, 00:16:35.987 { 00:16:35.987 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:35.987 "subtype": "NVMe", 00:16:35.987 "listen_addresses": [ 00:16:35.987 { 00:16:35.987 "trtype": "VFIOUSER", 00:16:35.987 "adrfam": "IPv4", 00:16:35.987 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:35.987 "trsvcid": "0" 00:16:35.987 } 00:16:35.987 ], 00:16:35.987 "allow_any_host": true, 00:16:35.987 "hosts": [], 00:16:35.987 "serial_number": "SPDK1", 00:16:35.987 "model_number": "SPDK bdev Controller", 00:16:35.987 "max_namespaces": 32, 00:16:35.987 "min_cntlid": 1, 00:16:35.987 "max_cntlid": 65519, 00:16:35.987 "namespaces": [ 00:16:35.987 { 00:16:35.987 "nsid": 1, 00:16:35.987 "bdev_name": "Malloc1", 00:16:35.987 "name": "Malloc1", 00:16:35.987 "nguid": "A528497890614518ACAEC6DFF0AABBCA", 00:16:35.987 "uuid": "a5284978-9061-4518-acae-c6dff0aabbca" 00:16:35.987 }, 00:16:35.987 { 00:16:35.987 "nsid": 2, 00:16:35.987 "bdev_name": "Malloc3", 00:16:35.987 "name": "Malloc3", 00:16:35.987 "nguid": "892722FE08A44A5D91951B111F3A452F", 00:16:35.987 "uuid": "892722fe-08a4-4a5d-9195-1b111f3a452f" 00:16:35.987 } 00:16:35.987 ] 00:16:35.987 }, 00:16:35.987 { 00:16:35.987 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:35.987 "subtype": "NVMe", 00:16:35.987 "listen_addresses": [ 00:16:35.987 { 00:16:35.987 "trtype": "VFIOUSER", 00:16:35.987 "adrfam": "IPv4", 00:16:35.987 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:35.987 "trsvcid": "0" 00:16:35.987 } 00:16:35.987 ], 00:16:35.987 "allow_any_host": true, 00:16:35.987 "hosts": [], 00:16:35.987 "serial_number": "SPDK2", 00:16:35.987 "model_number": "SPDK bdev Controller", 00:16:35.987 "max_namespaces": 32, 00:16:35.987 "min_cntlid": 1, 00:16:35.987 "max_cntlid": 65519, 00:16:35.987 "namespaces": [ 00:16:35.987 { 00:16:35.987 "nsid": 1, 00:16:35.987 "bdev_name": "Malloc2", 00:16:35.987 "name": "Malloc2", 00:16:35.987 "nguid": "628EA76AFEDE408A8CA96DB3B2E65F6A", 00:16:35.987 "uuid": "628ea76a-fede-408a-8ca9-6db3b2e65f6a" 00:16:35.987 } 00:16:35.987 ] 00:16:35.987 } 00:16:35.987 ] 00:16:35.987 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:35.987 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1332811 00:16:35.987 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:35.987 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 
00:16:35.987 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:35.987 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:35.987 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:35.987 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:35.987 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:35.987 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:36.249 [2024-11-20 09:50:06.924474] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:36.249 Malloc4 00:16:36.249 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:36.249 [2024-11-20 09:50:07.120782] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:36.249 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:36.249 Asynchronous Event Request test 00:16:36.249 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:36.249 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:36.249 Registering asynchronous event callbacks... 00:16:36.249 Starting namespace attribute notice tests for all controllers... 00:16:36.249 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:36.249 aer_cb - Changed Namespace 00:16:36.249 Cleaning up... 
00:16:36.510 [
00:16:36.510   {
00:16:36.510     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:16:36.510     "subtype": "Discovery",
00:16:36.510     "listen_addresses": [],
00:16:36.510     "allow_any_host": true,
00:16:36.510     "hosts": []
00:16:36.510   },
00:16:36.510   {
00:16:36.510     "nqn": "nqn.2019-07.io.spdk:cnode1",
00:16:36.510     "subtype": "NVMe",
00:16:36.510     "listen_addresses": [
00:16:36.510       {
00:16:36.510         "trtype": "VFIOUSER",
00:16:36.511         "adrfam": "IPv4",
00:16:36.511         "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:16:36.511         "trsvcid": "0"
00:16:36.511       }
00:16:36.511     ],
00:16:36.511     "allow_any_host": true,
00:16:36.511     "hosts": [],
00:16:36.511     "serial_number": "SPDK1",
00:16:36.511     "model_number": "SPDK bdev Controller",
00:16:36.511     "max_namespaces": 32,
00:16:36.511     "min_cntlid": 1,
00:16:36.511     "max_cntlid": 65519,
00:16:36.511     "namespaces": [
00:16:36.511       {
00:16:36.511         "nsid": 1,
00:16:36.511         "bdev_name": "Malloc1",
00:16:36.511         "name": "Malloc1",
00:16:36.511         "nguid": "A528497890614518ACAEC6DFF0AABBCA",
00:16:36.511         "uuid": "a5284978-9061-4518-acae-c6dff0aabbca"
00:16:36.511       },
00:16:36.511       {
00:16:36.511         "nsid": 2,
00:16:36.511         "bdev_name": "Malloc3",
00:16:36.511         "name": "Malloc3",
00:16:36.511         "nguid": "892722FE08A44A5D91951B111F3A452F",
00:16:36.511         "uuid": "892722fe-08a4-4a5d-9195-1b111f3a452f"
00:16:36.511       }
00:16:36.511     ]
00:16:36.511   },
00:16:36.511   {
00:16:36.511     "nqn": "nqn.2019-07.io.spdk:cnode2",
00:16:36.511     "subtype": "NVMe",
00:16:36.511     "listen_addresses": [
00:16:36.511       {
00:16:36.511         "trtype": "VFIOUSER",
00:16:36.511         "adrfam": "IPv4",
00:16:36.511         "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:16:36.511         "trsvcid": "0"
00:16:36.511       }
00:16:36.511     ],
00:16:36.511     "allow_any_host": true,
00:16:36.511     "hosts": [],
00:16:36.511     "serial_number": "SPDK2",
00:16:36.511     "model_number": "SPDK bdev Controller",
00:16:36.511     "max_namespaces": 32,
00:16:36.511     "min_cntlid": 1,
00:16:36.511     "max_cntlid": 65519,
00:16:36.511     "namespaces": [
00:16:36.511       {
00:16:36.511         "nsid": 1,
00:16:36.511         "bdev_name": "Malloc2",
00:16:36.511         "name": "Malloc2",
00:16:36.511         "nguid": "628EA76AFEDE408A8CA96DB3B2E65F6A",
00:16:36.511         "uuid": "628ea76a-fede-408a-8ca9-6db3b2e65f6a"
00:16:36.511       },
00:16:36.511       {
00:16:36.511         "nsid": 2,
00:16:36.511         "bdev_name": "Malloc4",
00:16:36.511         "name": "Malloc4",
00:16:36.511         "nguid": "5F50D959A35F46D39F05E3C4AF310AE1",
00:16:36.511         "uuid": "5f50d959-a35f-46d3-9f05-e3c4af310ae1"
00:16:36.511       }
00:16:36.511     ]
00:16:36.511   }
00:16:36.511 ]
00:16:36.511 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1332811
00:16:36.511 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user
00:16:36.511 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1323771
00:16:36.511 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1323771 ']'
00:16:36.511 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1323771
00:16:36.511 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname
00:16:36.511 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:36.511 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1323771
00:16:36.511 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:36.511 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:36.511 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1323771'
00:16:36.511 killing process with pid 1323771
00:16:36.511 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1323771
00:16:36.511 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1323771
00:16:36.771 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user
00:16:36.771 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:16:36.771 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I'
00:16:36.771 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode
00:16:36.771 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I'
00:16:36.771 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1333148
00:16:36.771 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1333148'
00:16:36.771 Process pid: 1333148
00:16:36.771 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:16:36.771 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode
00:16:36.771 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1333148
00:16:36.771 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1333148 ']'
00:16:36.771 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:36.771 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:36.771 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:36.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:36.771 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:36.771 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:16:36.771 [2024-11-20 09:50:07.582805] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:16:36.771 [2024-11-20 09:50:07.583724] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:16:36.771 [2024-11-20 09:50:07.583768] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:36.771 [2024-11-20 09:50:07.669243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:37.031 [2024-11-20 09:50:07.698365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:37.031 [2024-11-20 09:50:07.698399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:37.031 [2024-11-20 09:50:07.698405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:37.031 [2024-11-20 09:50:07.698410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:37.031 [2024-11-20 09:50:07.698414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:37.031 [2024-11-20 09:50:07.699621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:37.031 [2024-11-20 09:50:07.699772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:37.031 [2024-11-20 09:50:07.699898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:37.031 [2024-11-20 09:50:07.699900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:16:37.031 [2024-11-20 09:50:07.750143] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:16:37.031 [2024-11-20 09:50:07.751066] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:16:37.031 [2024-11-20 09:50:07.751927] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:16:37.031 [2024-11-20 09:50:07.752409] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:16:37.031 [2024-11-20 09:50:07.752434] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
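From "setup_nvmf_vfio_user --interrupt-mode" down to the intr-mode notices above, the harness relaunches the target for the interrupt-mode pass: -m '[0,1,2,3]' pins a four-core list (hence "Total cores available: 4" and reactors on cores 0-3), and '-M -I' is the transport_args string the test forwards to nvmf_create_transport, as the next lines show. A sketch of that relaunch, with every flag taken verbatim from the xtrace:

bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$bin -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &    # reactors sleep until an event arrives
nvmfpid=$!
# once the target answers on /var/tmp/spdk.sock (waitforlisten above):
$rpc nvmf_create_transport -t VFIOUSER -M -I             # VFIOUSER transport with the forwarded flags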
00:16:37.603 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.603 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:37.603 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:38.544 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:38.805 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:38.805 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:38.805 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:38.805 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:38.805 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:39.065 Malloc1 00:16:39.065 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:39.325 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:39.325 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:39.587 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:39.587 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:39.587 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:39.848 Malloc2 00:16:39.848 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:39.848 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:40.110 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:40.371 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:40.371 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1333148 00:16:40.371 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 1333148 ']' 00:16:40.371 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1333148 00:16:40.371 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:40.371 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:40.371 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1333148 00:16:40.371 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:40.371 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:40.371 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1333148' 00:16:40.371 killing process with pid 1333148 00:16:40.371 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1333148 00:16:40.371 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1333148 00:16:40.632 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:40.632 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:40.632 00:16:40.632 real 0m50.912s 00:16:40.632 user 3m15.161s 00:16:40.632 sys 0m2.679s 00:16:40.632 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.632 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:40.632 ************************************ 00:16:40.632 END TEST nvmf_vfio_user 00:16:40.632 ************************************ 00:16:40.632 09:50:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:40.632 09:50:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:40.632 09:50:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.632 09:50:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:40.632 ************************************ 00:16:40.632 START TEST nvmf_vfio_user_nvme_compliance 00:16:40.632 ************************************ 00:16:40.632 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:40.632 * Looking for test storage... 
00:16:40.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:40.632 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:40.632 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:16:40.632 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:40.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.894 --rc genhtml_branch_coverage=1 00:16:40.894 --rc genhtml_function_coverage=1 00:16:40.894 --rc genhtml_legend=1 00:16:40.894 --rc geninfo_all_blocks=1 00:16:40.894 --rc geninfo_unexecuted_blocks=1 00:16:40.894 00:16:40.894 ' 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:40.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.894 --rc genhtml_branch_coverage=1 00:16:40.894 --rc genhtml_function_coverage=1 00:16:40.894 --rc genhtml_legend=1 00:16:40.894 --rc geninfo_all_blocks=1 00:16:40.894 --rc geninfo_unexecuted_blocks=1 00:16:40.894 00:16:40.894 ' 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:40.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.894 --rc genhtml_branch_coverage=1 00:16:40.894 --rc genhtml_function_coverage=1 00:16:40.894 --rc genhtml_legend=1 00:16:40.894 --rc geninfo_all_blocks=1 00:16:40.894 --rc geninfo_unexecuted_blocks=1 00:16:40.894 00:16:40.894 ' 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:40.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.894 --rc genhtml_branch_coverage=1 00:16:40.894 --rc genhtml_function_coverage=1 00:16:40.894 --rc genhtml_legend=1 00:16:40.894 --rc geninfo_all_blocks=1 00:16:40.894 --rc 
geninfo_unexecuted_blocks=1 00:16:40.894 00:16:40.894 ' 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:40.894 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:40.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1333906 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1333906' 00:16:40.895 Process pid: 1333906 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1333906 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1333906 ']' 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.895 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:40.895 [2024-11-20 09:50:11.659289] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
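Unlike the interrupt-mode pass, which pinned an explicit core list, this compliance target is started with the bitmask form -m 0x7: 0x7 is binary 111, i.e. cores 0-2, which is why the EAL output below reports "Total cores available: 3" and three reactors come up. A small illustrative helper for decoding such masks:

mask=0x7                                   # value taken from the nvmf_tgt invocation above
for core in 0 1 2 3; do
  (( (mask >> core) & 1 )) && echo "core $core selected"   # prints cores 0, 1 and 2
done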
00:16:40.895 [2024-11-20 09:50:11.659364] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.895 [2024-11-20 09:50:11.747785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:40.895 [2024-11-20 09:50:11.781535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.895 [2024-11-20 09:50:11.781565] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.895 [2024-11-20 09:50:11.781571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.895 [2024-11-20 09:50:11.781576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.895 [2024-11-20 09:50:11.781581] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.895 [2024-11-20 09:50:11.782931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.895 [2024-11-20 09:50:11.783086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.895 [2024-11-20 09:50:11.783089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.838 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.838 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:41.838 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:42.782 malloc0 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:42.782 09:50:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.782 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:42.782 00:16:42.782 00:16:42.782 CUnit - A unit testing framework for C - Version 2.1-3 00:16:42.782 http://cunit.sourceforge.net/ 00:16:42.782 00:16:42.782 00:16:42.782 Suite: nvme_compliance 00:16:43.044 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 09:50:13.715590] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:43.044 [2024-11-20 09:50:13.716886] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:43.044 [2024-11-20 09:50:13.716899] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:43.044 [2024-11-20 09:50:13.716903] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:43.044 [2024-11-20 09:50:13.718613] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:43.044 passed 00:16:43.044 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 09:50:13.794085] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:43.044 [2024-11-20 09:50:13.797104] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:43.044 passed 00:16:43.044 Test: admin_identify_ns ...[2024-11-20 09:50:13.873698] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:43.044 [2024-11-20 09:50:13.937169] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:43.044 [2024-11-20 09:50:13.945169] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:43.306 [2024-11-20 09:50:13.966248] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:16:43.306 passed 00:16:43.306 Test: admin_get_features_mandatory_features ...[2024-11-20 09:50:14.041915] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:43.306 [2024-11-20 09:50:14.044938] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:43.306 passed 00:16:43.306 Test: admin_get_features_optional_features ...[2024-11-20 09:50:14.120427] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:43.306 [2024-11-20 09:50:14.123445] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:43.306 passed 00:16:43.306 Test: admin_set_features_number_of_queues ...[2024-11-20 09:50:14.200170] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:43.568 [2024-11-20 09:50:14.306249] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:43.568 passed 00:16:43.568 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 09:50:14.382881] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:43.568 [2024-11-20 09:50:14.387925] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:43.568 passed 00:16:43.568 Test: admin_get_log_page_with_lpo ...[2024-11-20 09:50:14.461663] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:43.829 [2024-11-20 09:50:14.529167] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:43.829 [2024-11-20 09:50:14.542199] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:43.829 passed 00:16:43.829 Test: fabric_property_get ...[2024-11-20 09:50:14.616386] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:43.829 [2024-11-20 09:50:14.617586] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:43.829 [2024-11-20 09:50:14.619411] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:43.829 passed 00:16:43.829 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 09:50:14.696869] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:43.829 [2024-11-20 09:50:14.698086] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:43.829 [2024-11-20 09:50:14.699892] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:43.829 passed 00:16:44.089 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 09:50:14.773634] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:44.089 [2024-11-20 09:50:14.857167] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:44.089 [2024-11-20 09:50:14.873166] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:44.089 [2024-11-20 09:50:14.878240] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:44.089 passed 00:16:44.089 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 09:50:14.951450] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:44.090 [2024-11-20 09:50:14.952654] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:44.090 [2024-11-20 09:50:14.954470] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller
00:16:44.090 passed
00:16:44.351 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 09:50:15.031213] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:16:44.351 [2024-11-20 09:50:15.107162] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:16:44.351 [2024-11-20 09:50:15.131165] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:16:44.351 [2024-11-20 09:50:15.136229] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:16:44.351 passed
00:16:44.351 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 09:50:15.211248] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:16:44.351 [2024-11-20 09:50:15.212449] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big
00:16:44.352 [2024-11-20 09:50:15.212467] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported
00:16:44.352 [2024-11-20 09:50:15.214269] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:16:44.352 passed
00:16:44.612 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 09:50:15.290540] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:16:44.612 [2024-11-20 09:50:15.382165] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1
00:16:44.612 [2024-11-20 09:50:15.390165] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
00:16:44.612 [2024-11-20 09:50:15.398166] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
00:16:44.612 [2024-11-20 09:50:15.406166] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
00:16:44.612 [2024-11-20 09:50:15.435229] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:16:44.612 passed
00:16:44.612 Test: admin_create_io_sq_verify_pc ...[2024-11-20 09:50:15.508418] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:16:44.612 [2024-11-20 09:50:15.525169] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:16:44.873 [2024-11-20 09:50:15.542570] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:16:44.873 passed
00:16:44.873 Test: admin_create_io_qp_max_qps ...[2024-11-20 09:50:15.621051] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:16:45.816 [2024-11-20 09:50:16.724167] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs
00:16:46.386 [2024-11-20 09:50:17.103607] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:16:46.386 passed
00:16:46.386 Test: admin_create_io_sq_shared_cq ...[2024-11-20 09:50:17.177524] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:16:46.646 [2024-11-20 09:50:17.310165] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:16:46.646 [2024-11-20 09:50:17.350215] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:16:46.646 passed
00:16:46.646
00:16:46.646 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:16:46.646               suites      1      1    n/a      0        0
00:16:46.646                tests     18     18     18      0        0
00:16:46.646              asserts    360    360    360      0      n/a
00:16:46.646
00:16:46.646 Elapsed time = 1.496 seconds
00:16:46.646 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1333906
00:16:46.646 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1333906 ']'
00:16:46.646 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1333906
00:16:46.646 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname
00:16:46.646 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:46.646 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1333906
00:16:46.646 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:46.646 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:46.646 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1333906'
00:16:46.646 killing process with pid 1333906
00:16:46.646 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1333906
00:16:46.646 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1333906
00:16:46.908 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user
00:16:46.908 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:16:46.908
00:16:46.908 real 0m6.205s
00:16:46.908 user 0m17.563s
00:16:46.908 sys 0m0.551s
00:16:46.908 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:46.908 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:16:46.908 ************************************
00:16:46.908 END TEST nvmf_vfio_user_nvme_compliance
00:16:46.908 ************************************
00:16:46.908 09:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:16:46.908 09:50:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:16:46.908 09:50:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:46.908 09:50:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:46.908 ************************************
00:16:46.908 START TEST nvmf_vfio_user_fuzz
00:16:46.908 ************************************
00:16:46.908 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:16:46.908 * Looking for test storage...
00:16:46.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.908 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:46.908 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:16:46.908 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:47.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.170 --rc genhtml_branch_coverage=1 00:16:47.170 --rc genhtml_function_coverage=1 00:16:47.170 --rc genhtml_legend=1 00:16:47.170 --rc geninfo_all_blocks=1 00:16:47.170 --rc geninfo_unexecuted_blocks=1 00:16:47.170 00:16:47.170 ' 00:16:47.170 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:47.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.170 --rc genhtml_branch_coverage=1 00:16:47.170 --rc genhtml_function_coverage=1 00:16:47.171 --rc genhtml_legend=1 00:16:47.171 --rc geninfo_all_blocks=1 00:16:47.171 --rc geninfo_unexecuted_blocks=1 00:16:47.171 00:16:47.171 ' 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:47.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.171 --rc genhtml_branch_coverage=1 00:16:47.171 --rc genhtml_function_coverage=1 00:16:47.171 --rc genhtml_legend=1 00:16:47.171 --rc geninfo_all_blocks=1 00:16:47.171 --rc geninfo_unexecuted_blocks=1 00:16:47.171 00:16:47.171 ' 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:47.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.171 --rc genhtml_branch_coverage=1 00:16:47.171 --rc genhtml_function_coverage=1 00:16:47.171 --rc genhtml_legend=1 00:16:47.171 --rc geninfo_all_blocks=1 00:16:47.171 --rc geninfo_unexecuted_blocks=1 00:16:47.171 00:16:47.171 ' 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:47.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1335299 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1335299' 00:16:47.171 Process pid: 1335299 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1335299 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1335299 ']' 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
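Two quirks are visible in the stretch of trace above: the PATH value keeps growing because paths/export.sh prepends the Go/protoc/golangci directories unconditionally every time it is sourced, and common.sh line 33 logs "integer expression expected" because an empty string reaches a numeric [ -eq ] test. A minimal sketch of fixes for both; the flag name in (1) is a placeholder, since the trace does not show which variable line 33 actually tests:

    # (1) Hypothetical guard: default the flag before the numeric test so an
    #     empty value can never reach [ -eq ]. SPDK_TEST_SOME_FLAG is a
    #     placeholder name, not the variable from the shipped common.sh.
    : "${SPDK_TEST_SOME_FLAG:=0}"
    [ "$SPDK_TEST_SOME_FLAG" -eq 1 ] && echo 'flag set'

    # (2) Idempotent PATH prepend: only add a directory if it is not already
    #     present, avoiding the duplication seen in the logged PATH.
    case ":$PATH:" in
        *':/opt/go/1.21.1/bin:'*) ;;
        *) PATH=/opt/go/1.21.1/bin:$PATH ;;
    esac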
00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.171 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:48.112 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.112 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:48.112 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:49.052 malloc0 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.052 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:49.053 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.053 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
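The fuzz target is now fully assembled through five rpc_cmd calls. Written out as explicit rpc.py invocations against the default /var/tmp/spdk.sock, this is a condensed sketch of the same verbs and arguments shown in the trace, with the repo path shortened:

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0   # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The resulting trid string (trtype/subnqn/traddr) is what nvme_fuzz is pointed at in the next step.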
00:16:49.053 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:21.175 Fuzzing completed. Shutting down the fuzz application 00:17:21.175 00:17:21.175 Dumping successful admin opcodes: 00:17:21.175 8, 9, 10, 24, 00:17:21.175 Dumping successful io opcodes: 00:17:21.175 0, 00:17:21.175 NS: 0x20000081ef00 I/O qp, Total commands completed: 1234520, total successful commands: 4846, random_seed: 2365884480 00:17:21.175 NS: 0x20000081ef00 admin qp, Total commands completed: 260163, total successful commands: 2094, random_seed: 3274456384 00:17:21.175 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:21.175 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.175 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:21.175 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.175 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1335299 00:17:21.175 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1335299 ']' 00:17:21.175 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1335299 00:17:21.175 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1335299 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1335299' 00:17:21.176 killing process with pid 1335299 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1335299 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1335299 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:21.176 00:17:21.176 real 0m32.885s 00:17:21.176 user 0m34.991s 00:17:21.176 sys 0m25.750s 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:21.176 
************************************ 00:17:21.176 END TEST nvmf_vfio_user_fuzz 00:17:21.176 ************************************ 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:21.176 ************************************ 00:17:21.176 START TEST nvmf_auth_target 00:17:21.176 ************************************ 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:21.176 * Looking for test storage... 00:17:21.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:21.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.176 --rc genhtml_branch_coverage=1 00:17:21.176 --rc genhtml_function_coverage=1 00:17:21.176 --rc genhtml_legend=1 00:17:21.176 --rc geninfo_all_blocks=1 00:17:21.176 --rc geninfo_unexecuted_blocks=1 00:17:21.176 00:17:21.176 ' 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:21.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.176 --rc genhtml_branch_coverage=1 00:17:21.176 --rc genhtml_function_coverage=1 00:17:21.176 --rc genhtml_legend=1 00:17:21.176 --rc geninfo_all_blocks=1 00:17:21.176 --rc geninfo_unexecuted_blocks=1 00:17:21.176 00:17:21.176 ' 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:21.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.176 --rc genhtml_branch_coverage=1 00:17:21.176 --rc genhtml_function_coverage=1 00:17:21.176 --rc genhtml_legend=1 00:17:21.176 --rc geninfo_all_blocks=1 00:17:21.176 --rc geninfo_unexecuted_blocks=1 00:17:21.176 00:17:21.176 ' 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:21.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.176 --rc genhtml_branch_coverage=1 00:17:21.176 --rc genhtml_function_coverage=1 00:17:21.176 --rc genhtml_legend=1 00:17:21.176 --rc geninfo_all_blocks=1 00:17:21.176 --rc geninfo_unexecuted_blocks=1 00:17:21.176 00:17:21.176 ' 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:21.176 09:50:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.176 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:21.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:21.177 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:27.770 
09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:27.770 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:27.770 09:50:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:27.770 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:27.770 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:27.770 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:27.770 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:27.770 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:27.771 09:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:27.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:17:27.771 00:17:27.771 --- 10.0.0.2 ping statistics --- 00:17:27.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.771 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:27.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:27.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:17:27.771 00:17:27.771 --- 10.0.0.1 ping statistics --- 00:17:27.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.771 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1345288 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1345288 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1345288 ']' 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
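Condensed, the network bring-up that nvmf_tcp_init just performed: one E810 port (cvl_0_0) is moved into a fresh namespace to play the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings above prove both directions work before any NVMe traffic starts. The commands below are taken from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator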
00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.771 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.343 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.343 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:28.343 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:28.343 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:28.343 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1345408 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b9ff366b93b586719f487e1108fe259482821eeafc1d6326 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.5aJ 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b9ff366b93b586719f487e1108fe259482821eeafc1d6326 0 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b9ff366b93b586719f487e1108fe259482821eeafc1d6326 0 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b9ff366b93b586719f487e1108fe259482821eeafc1d6326 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
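gen_dhchap_key's mechanics, reconstructed from the trace: random hex from /dev/urandom, a temp file named after the digest, DHHC-1 framing, then permissions locked down. The inline "python -" step that applies the framing is not expanded in the log, so the exact envelope format and the redirection into the key file are assumptions in this sketch:

    gen_dhchap_key() {                                  # e.g. gen_dhchap_key null 48
        local digest=$1 len=$2
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # len hex characters of key material
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # Assumed: format_dhchap_key wraps the hex key in a DHHC-1 envelope
        # tagged with the digest code and the result lands in $file.
        format_dhchap_key "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"                              # keys must not be world-readable
        echo "$file"
    }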
00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.5aJ 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.5aJ 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.5aJ 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2c1e5a4509ac7a1e3f211640f30e0a643a9a9fec45cae088ee85781075509ecc 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.PEA 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2c1e5a4509ac7a1e3f211640f30e0a643a9a9fec45cae088ee85781075509ecc 3 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2c1e5a4509ac7a1e3f211640f30e0a643a9a9fec45cae088ee85781075509ecc 3 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2c1e5a4509ac7a1e3f211640f30e0a643a9a9fec45cae088ee85781075509ecc 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.PEA 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.PEA 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.PEA 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8cadeb4579fd5a8170ce242f80691ee1 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.PxE 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8cadeb4579fd5a8170ce242f80691ee1 1 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8cadeb4579fd5a8170ce242f80691ee1 1 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8cadeb4579fd5a8170ce242f80691ee1 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.PxE 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.PxE 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.PxE 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:28.605 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:28.606 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=74d36679bca70f9186bb479db09326a139fa48625796cbde 00:17:28.606 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:28.606 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.h0w 00:17:28.606 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 74d36679bca70f9186bb479db09326a139fa48625796cbde 2 00:17:28.606 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 74d36679bca70f9186bb479db09326a139fa48625796cbde 2 00:17:28.606 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:28.606 09:50:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:28.606 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=74d36679bca70f9186bb479db09326a139fa48625796cbde 00:17:28.606 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:28.606 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:28.867 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.h0w 00:17:28.867 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.h0w 00:17:28.867 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.h0w 00:17:28.867 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:28.867 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:28.867 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:28.867 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:28.867 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:28.867 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:28.867 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:28.867 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1152e53de55779133750549ffeb1a3d8f88090ec4dafe502 00:17:28.867 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:28.867 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ShE 00:17:28.867 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1152e53de55779133750549ffeb1a3d8f88090ec4dafe502 2 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1152e53de55779133750549ffeb1a3d8f88090ec4dafe502 2 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1152e53de55779133750549ffeb1a3d8f88090ec4dafe502 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ShE 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ShE 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.ShE 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1d1ace86f9cfe5e85f2d3191fc1fbaa8 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.hCe 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1d1ace86f9cfe5e85f2d3191fc1fbaa8 1 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1d1ace86f9cfe5e85f2d3191fc1fbaa8 1 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1d1ace86f9cfe5e85f2d3191fc1fbaa8 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.hCe 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.hCe 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.hCe 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c104f556154c658932b12c0e86988b6da4c16e82d2892702d1ecef8b87bd39af 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.iLE 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key c104f556154c658932b12c0e86988b6da4c16e82d2892702d1ecef8b87bd39af 3 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c104f556154c658932b12c0e86988b6da4c16e82d2892702d1ecef8b87bd39af 3 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c104f556154c658932b12c0e86988b6da4c16e82d2892702d1ecef8b87bd39af 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.iLE 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.iLE 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.iLE 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1345288 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1345288 ']' 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.868 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.129 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.129 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:29.129 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1345408 /var/tmp/host.sock 00:17:29.129 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1345408 ']' 00:17:29.129 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:29.129 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.129 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:29.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
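Past this point the flow is mechanical, and every keyid repeats the cycle the trace spells out below: register the key (and the controller key, when one exists) on both RPC sockets, pin the initiator to one digest/dhgroup combination, add the host to the subsystem with the keys, attach a controller, and confirm via nvmf_subsystem_get_qpairs that the qpair completed DH-HMAC-CHAP authentication. A condensed sketch of one iteration, using only RPCs that appear verbatim in the trace (paths, key files, and the subsystem/host NQNs are taken from the log; the jq check condenses the auth.sh assertions into a single expression):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# register key0/ckey0 on the target (default /var/tmp/spdk.sock) and on the host daemon
$rpc keyring_file_add_key key0 /tmp/spdk.key-null.5aJ
$rpc -s $hostsock keyring_file_add_key key0 /tmp/spdk.key-null.5aJ
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PEA
$rpc -s $hostsock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PEA

# restrict the initiator to one digest/dhgroup pair, then authenticate key0
$rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc -s $hostsock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# the qpair must report a completed DH-HMAC-CHAP handshake with the expected parameters
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -e '.[0].auth.state == "completed"'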
00:17:29.129 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.129 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.391 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.391 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:29.391 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:29.391 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.391 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.391 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.391 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:29.391 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5aJ 00:17:29.391 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.391 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.391 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.391 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.5aJ 00:17:29.391 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.5aJ 00:17:29.684 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.PEA ]] 00:17:29.684 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PEA 00:17:29.684 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.684 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.684 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.684 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PEA 00:17:29.684 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PEA 00:17:29.985 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:29.985 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.PxE 00:17:29.985 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.985 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.985 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.985 09:51:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.PxE 00:17:29.985 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.PxE 00:17:29.985 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.h0w ]] 00:17:29.985 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.h0w 00:17:29.985 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.985 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.985 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.985 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.h0w 00:17:29.985 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.h0w 00:17:30.320 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:30.320 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ShE 00:17:30.320 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.320 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.320 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.320 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ShE 00:17:30.320 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ShE 00:17:30.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.hCe ]] 00:17:30.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hCe 00:17:30.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hCe 00:17:30.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hCe 00:17:30.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:30.581 09:51:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.iLE 00:17:30.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.iLE 00:17:30.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.iLE 00:17:30.843 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:30.843 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:30.843 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.843 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.843 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:30.843 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:31.104 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:31.104 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.104 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:31.104 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:31.104 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:31.104 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.105 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.105 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.105 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.105 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.105 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.105 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.105 
09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.366 00:17:31.366 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.366 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.366 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.627 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.627 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.627 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.627 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.627 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.627 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.627 { 00:17:31.627 "cntlid": 1, 00:17:31.627 "qid": 0, 00:17:31.627 "state": "enabled", 00:17:31.627 "thread": "nvmf_tgt_poll_group_000", 00:17:31.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:31.627 "listen_address": { 00:17:31.627 "trtype": "TCP", 00:17:31.627 "adrfam": "IPv4", 00:17:31.627 "traddr": "10.0.0.2", 00:17:31.627 "trsvcid": "4420" 00:17:31.627 }, 00:17:31.627 "peer_address": { 00:17:31.627 "trtype": "TCP", 00:17:31.627 "adrfam": "IPv4", 00:17:31.627 "traddr": "10.0.0.1", 00:17:31.627 "trsvcid": "43226" 00:17:31.627 }, 00:17:31.627 "auth": { 00:17:31.627 "state": "completed", 00:17:31.627 "digest": "sha256", 00:17:31.627 "dhgroup": "null" 00:17:31.627 } 00:17:31.627 } 00:17:31.627 ]' 00:17:31.627 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.627 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.627 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.627 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:31.627 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.627 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.627 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.627 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.887 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:17:31.887 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:17:32.458 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.458 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.458 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.458 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.458 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.458 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.458 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:32.458 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:32.719 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:32.719 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.719 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:32.719 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:32.719 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:32.719 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.719 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.719 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.719 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.719 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.719 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.719 09:51:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.719 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.980 00:17:32.980 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.980 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.980 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.240 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.240 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.240 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.240 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.240 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.240 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.240 { 00:17:33.240 "cntlid": 3, 00:17:33.240 "qid": 0, 00:17:33.240 "state": "enabled", 00:17:33.240 "thread": "nvmf_tgt_poll_group_000", 00:17:33.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:33.240 "listen_address": { 00:17:33.240 "trtype": "TCP", 00:17:33.240 "adrfam": "IPv4", 00:17:33.240 "traddr": "10.0.0.2", 00:17:33.240 "trsvcid": "4420" 00:17:33.240 }, 00:17:33.240 "peer_address": { 00:17:33.240 "trtype": "TCP", 00:17:33.240 "adrfam": "IPv4", 00:17:33.240 "traddr": "10.0.0.1", 00:17:33.240 "trsvcid": "43262" 00:17:33.240 }, 00:17:33.240 "auth": { 00:17:33.240 "state": "completed", 00:17:33.240 "digest": "sha256", 00:17:33.240 "dhgroup": "null" 00:17:33.240 } 00:17:33.240 } 00:17:33.240 ]' 00:17:33.240 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.240 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.240 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.240 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:33.240 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.240 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.240 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.240 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.500 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:17:33.500 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:17:34.073 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.073 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:34.073 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.073 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.073 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.073 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.073 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:34.073 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:34.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:34.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:34.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:34.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:34.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.334 09:51:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.595 00:17:34.595 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.595 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.595 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.595 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.856 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.856 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.856 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.856 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.856 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.856 { 00:17:34.856 "cntlid": 5, 00:17:34.856 "qid": 0, 00:17:34.856 "state": "enabled", 00:17:34.856 "thread": "nvmf_tgt_poll_group_000", 00:17:34.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.856 "listen_address": { 00:17:34.856 "trtype": "TCP", 00:17:34.856 "adrfam": "IPv4", 00:17:34.856 "traddr": "10.0.0.2", 00:17:34.856 "trsvcid": "4420" 00:17:34.856 }, 00:17:34.856 "peer_address": { 00:17:34.856 "trtype": "TCP", 00:17:34.856 "adrfam": "IPv4", 00:17:34.856 "traddr": "10.0.0.1", 00:17:34.856 "trsvcid": "35086" 00:17:34.856 }, 00:17:34.856 "auth": { 00:17:34.856 "state": "completed", 00:17:34.856 "digest": "sha256", 00:17:34.856 "dhgroup": "null" 00:17:34.856 } 00:17:34.856 } 00:17:34.856 ]' 00:17:34.856 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.856 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.856 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.856 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:34.856 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.856 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.856 09:51:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.856 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.117 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:17:35.117 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:17:35.690 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.690 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.690 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.690 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.690 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.690 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.690 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:35.690 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:35.951 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:35.951 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.951 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:35.951 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:35.951 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:35.951 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.951 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:35.951 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.951 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:35.951 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.951 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:35.951 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.951 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.211 00:17:36.211 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.211 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.211 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.211 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.211 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.211 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.211 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.211 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.211 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.211 { 00:17:36.211 "cntlid": 7, 00:17:36.211 "qid": 0, 00:17:36.211 "state": "enabled", 00:17:36.211 "thread": "nvmf_tgt_poll_group_000", 00:17:36.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:36.211 "listen_address": { 00:17:36.211 "trtype": "TCP", 00:17:36.211 "adrfam": "IPv4", 00:17:36.211 "traddr": "10.0.0.2", 00:17:36.211 "trsvcid": "4420" 00:17:36.211 }, 00:17:36.211 "peer_address": { 00:17:36.211 "trtype": "TCP", 00:17:36.211 "adrfam": "IPv4", 00:17:36.211 "traddr": "10.0.0.1", 00:17:36.211 "trsvcid": "35102" 00:17:36.211 }, 00:17:36.211 "auth": { 00:17:36.211 "state": "completed", 00:17:36.211 "digest": "sha256", 00:17:36.211 "dhgroup": "null" 00:17:36.211 } 00:17:36.211 } 00:17:36.211 ]' 00:17:36.211 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.471 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.471 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.471 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:36.471 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.471 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.472 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.472 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.472 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:17:36.472 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.413 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.673 00:17:37.673 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.673 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.673 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.934 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.934 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.934 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.934 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.934 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.934 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.934 { 00:17:37.934 "cntlid": 9, 00:17:37.934 "qid": 0, 00:17:37.934 "state": "enabled", 00:17:37.934 "thread": "nvmf_tgt_poll_group_000", 00:17:37.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:37.934 "listen_address": { 00:17:37.934 "trtype": "TCP", 00:17:37.934 "adrfam": "IPv4", 00:17:37.934 "traddr": "10.0.0.2", 00:17:37.934 "trsvcid": "4420" 00:17:37.934 }, 00:17:37.934 "peer_address": { 00:17:37.934 "trtype": "TCP", 00:17:37.934 "adrfam": "IPv4", 00:17:37.934 "traddr": "10.0.0.1", 00:17:37.934 "trsvcid": "35126" 00:17:37.934 }, 00:17:37.934 "auth": { 00:17:37.934 "state": "completed", 00:17:37.934 "digest": "sha256", 00:17:37.934 "dhgroup": "ffdhe2048" 00:17:37.934 } 00:17:37.934 } 00:17:37.934 ]' 00:17:37.934 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.934 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:37.934 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.934 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:17:37.934 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.934 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.934 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.934 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.194 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:17:38.194 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:17:38.765 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.765 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.765 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.765 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.765 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.765 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.765 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:38.765 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:39.026 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:39.026 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.026 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:39.026 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:39.026 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:39.026 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.026 09:51:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.026 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.026 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.026 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.026 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.027 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.027 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.287 00:17:39.287 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.287 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.287 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.548 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.548 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.548 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.548 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.548 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.548 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.548 { 00:17:39.548 "cntlid": 11, 00:17:39.548 "qid": 0, 00:17:39.548 "state": "enabled", 00:17:39.548 "thread": "nvmf_tgt_poll_group_000", 00:17:39.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.548 "listen_address": { 00:17:39.548 "trtype": "TCP", 00:17:39.548 "adrfam": "IPv4", 00:17:39.548 "traddr": "10.0.0.2", 00:17:39.548 "trsvcid": "4420" 00:17:39.548 }, 00:17:39.548 "peer_address": { 00:17:39.548 "trtype": "TCP", 00:17:39.548 "adrfam": "IPv4", 00:17:39.548 "traddr": "10.0.0.1", 00:17:39.548 "trsvcid": "35148" 00:17:39.548 }, 00:17:39.548 "auth": { 00:17:39.548 "state": "completed", 00:17:39.548 "digest": "sha256", 00:17:39.548 "dhgroup": "ffdhe2048" 00:17:39.548 } 00:17:39.548 } 00:17:39.548 ]' 00:17:39.548 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.548 09:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:39.548 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:39.548 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:39.548 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:39.548 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:39.548 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:39.548 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:39.809 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==:
00:17:39.809 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==:
00:17:40.379 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:40.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:40.379 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:40.379 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.379 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:40.379 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.379 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:40.379 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:40.379 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:40.640 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:17:40.640 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:40.640 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:40.640 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:40.640 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:40.640 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:40.640 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:40.640 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.640 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:40.640 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.640 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:40.640 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:40.640 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:40.901 
00:17:40.901 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:40.901 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:40.901 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:41.162 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:41.162 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:41.162 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.162 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:41.162 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.162 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:41.162 {
00:17:41.162 "cntlid": 13,
00:17:41.162 "qid": 0,
00:17:41.162 "state": "enabled",
00:17:41.162 "thread": "nvmf_tgt_poll_group_000",
00:17:41.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:41.162 "listen_address": {
00:17:41.162 "trtype": "TCP",
00:17:41.162 "adrfam": "IPv4",
00:17:41.162 "traddr": "10.0.0.2",
00:17:41.162 "trsvcid": "4420"
00:17:41.162 },
00:17:41.162 "peer_address": {
00:17:41.162 "trtype": "TCP",
00:17:41.162 "adrfam": "IPv4",
00:17:41.162 "traddr": "10.0.0.1",
00:17:41.162 "trsvcid": "35164"
00:17:41.162 },
00:17:41.162 "auth": {
00:17:41.162 "state": "completed",
00:17:41.162 "digest": "sha256",
00:17:41.162 "dhgroup": "ffdhe2048"
00:17:41.162 }
00:17:41.162 }
00:17:41.162 ]'
00:17:41.162 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:41.162 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:41.162 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:41.162 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:41.162 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:41.162 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:41.162 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:41.162 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:41.423 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV:
00:17:41.423 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV:
00:17:41.993 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:41.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:41.993 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:41.993 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.993 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:41.993 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.993 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:41.993 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:41.993 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:42.253 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:17:42.253 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:42.253 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:42.253 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:42.253 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:42.253 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:42.253 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:17:42.253 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:42.253 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:42.253 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:42.253 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:42.253 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:42.253 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:42.513 
00:17:42.513 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:42.513 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:42.513 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:42.775 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:42.775 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:42.775 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:42.775 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:42.775 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:42.775 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:42.775 {
00:17:42.775 "cntlid": 15,
00:17:42.775 "qid": 0,
00:17:42.775 "state": "enabled",
00:17:42.775 "thread": "nvmf_tgt_poll_group_000",
00:17:42.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:42.775 "listen_address": {
00:17:42.775 "trtype": "TCP",
00:17:42.775 "adrfam": "IPv4",
00:17:42.775 "traddr": "10.0.0.2",
00:17:42.775 "trsvcid": "4420"
00:17:42.775 },
00:17:42.775 "peer_address": {
00:17:42.775 "trtype": "TCP",
00:17:42.775 "adrfam": "IPv4",
00:17:42.775 "traddr": "10.0.0.1",
00:17:42.775 "trsvcid": "35184"
00:17:42.775 },
00:17:42.775 "auth": {
00:17:42.775 "state": "completed",
00:17:42.775 "digest": "sha256",
00:17:42.775 "dhgroup": "ffdhe2048"
00:17:42.775 }
00:17:42.775 }
00:17:42.775 ]'
00:17:42.775 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:42.775 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:42.775 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:42.775 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:42.775 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:42.775 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:42.775 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:42.775 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:43.036 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=:
00:17:43.036 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=:
00:17:43.607 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:43.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:43.607 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:43.607 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.607 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:43.607 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.607 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:43.607 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:43.607 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:43.607 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:43.868 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:17:43.868 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:43.868 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:43.868 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:43.868 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:43.868 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:43.868 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:43.868 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.868 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:43.868 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.868 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:43.868 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:43.868 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:44.129 
00:17:44.129 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:44.129 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:44.129 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:44.391 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:44.391 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:44.391 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:44.391 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:44.391 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:44.391 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:44.391 {
00:17:44.391 "cntlid": 17,
00:17:44.391 "qid": 0,
00:17:44.391 "state": "enabled",
00:17:44.391 "thread": "nvmf_tgt_poll_group_000",
00:17:44.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:44.391 "listen_address": {
00:17:44.391 "trtype": "TCP",
00:17:44.391 "adrfam": "IPv4",
00:17:44.391 "traddr": "10.0.0.2",
00:17:44.391 "trsvcid": "4420"
00:17:44.391 },
00:17:44.391 "peer_address": {
00:17:44.391 "trtype": "TCP",
00:17:44.391 "adrfam": "IPv4",
00:17:44.391 "traddr": "10.0.0.1",
00:17:44.391 "trsvcid": "46570"
00:17:44.391 },
00:17:44.391 "auth": {
00:17:44.391 "state": "completed",
00:17:44.391 "digest": "sha256",
00:17:44.391 "dhgroup": "ffdhe3072"
00:17:44.391 }
00:17:44.391 }
00:17:44.391 ]'
00:17:44.391 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:44.391 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:44.391 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:44.391 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:44.391 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:44.391 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:44.391 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:44.391 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:44.652 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=:
00:17:44.653 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=:
00:17:45.224 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:45.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:45.224 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:45.224 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:45.224 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.224 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:45.224 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:45.224 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:45.224 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:45.485 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:17:45.485 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:45.485 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:45.485 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:45.485 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:45.485 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:45.485 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:45.485 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:45.485 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.485 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:45.485 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:45.485 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:45.485 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:45.747 
00:17:45.747 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:45.747 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:45.747 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:46.009 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:46.009 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:46.009 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:46.009 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.009 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:46.009 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:46.009 {
00:17:46.009 "cntlid": 19,
00:17:46.009 "qid": 0,
00:17:46.009 "state": "enabled",
00:17:46.009 "thread": "nvmf_tgt_poll_group_000",
00:17:46.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:46.009 "listen_address": {
00:17:46.009 "trtype": "TCP",
00:17:46.009 "adrfam": "IPv4",
00:17:46.009 "traddr": "10.0.0.2",
00:17:46.009 "trsvcid": "4420"
00:17:46.009 },
00:17:46.009 "peer_address": {
00:17:46.009 "trtype": "TCP",
00:17:46.009 "adrfam": "IPv4",
00:17:46.009 "traddr": "10.0.0.1",
00:17:46.009 "trsvcid": "46592"
00:17:46.009 },
00:17:46.009 "auth": {
00:17:46.009 "state": "completed",
00:17:46.009 "digest": "sha256",
00:17:46.009 "dhgroup": "ffdhe3072"
00:17:46.009 }
00:17:46.009 }
00:17:46.009 ]'
00:17:46.009 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:46.009 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:46.009 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:46.009 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:46.009 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:46.009 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:46.009 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:46.009 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:46.270 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==:
00:17:46.270 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==:
00:17:46.841 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:46.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:46.841 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:46.841 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:46.841 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.841 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:46.841 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:46.841 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:46.841 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:47.102 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2
00:17:47.102 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:47.102 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:47.102 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:47.102 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:47.102 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:47.102 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:47.102 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:47.102 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:47.102 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:47.102 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:47.102 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:47.102 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:47.364 
00:17:47.364 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:47.364 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:47.364 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:47.625 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:47.625 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:47.625 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:47.625 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:47.625 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:47.625 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:47.625 {
00:17:47.625 "cntlid": 21,
00:17:47.625 "qid": 0,
00:17:47.625 "state": "enabled",
00:17:47.625 "thread": "nvmf_tgt_poll_group_000",
00:17:47.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:47.625 "listen_address": {
00:17:47.625 "trtype": "TCP",
00:17:47.625 "adrfam": "IPv4",
00:17:47.625 "traddr": "10.0.0.2",
00:17:47.625 "trsvcid": "4420"
00:17:47.625 },
00:17:47.625 "peer_address": {
00:17:47.625 "trtype": "TCP",
00:17:47.625 "adrfam": "IPv4",
00:17:47.625 "traddr": "10.0.0.1",
00:17:47.625 "trsvcid": "46612"
00:17:47.625 },
00:17:47.625 "auth": {
00:17:47.625 "state": "completed",
00:17:47.625 "digest": "sha256",
00:17:47.625 "dhgroup": "ffdhe3072"
00:17:47.625 }
00:17:47.625 }
00:17:47.625 ]'
00:17:47.625 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:47.625 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:47.625 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:47.625 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:47.625 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:47.625 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:47.625 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:47.625 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:47.886 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV:
00:17:47.886 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV:
00:17:48.457 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:48.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:48.457 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:48.457 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:48.457 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.457 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:48.457 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:48.457 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:48.457 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:48.718 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3
00:17:48.718 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:48.718 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:48.718 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:48.718 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:48.718 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:48.718 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:17:48.718 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:48.718 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.718 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:48.718 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:48.718 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:48.718 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:48.979 
00:17:48.979 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:48.979 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:48.979 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:49.239 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:49.239 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:49.239 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:49.239 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:49.239 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.239 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:49.239 {
00:17:49.239 "cntlid": 23,
00:17:49.239 "qid": 0,
00:17:49.239 "state": "enabled",
00:17:49.239 "thread": "nvmf_tgt_poll_group_000",
00:17:49.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:49.239 "listen_address": {
00:17:49.239 "trtype": "TCP",
00:17:49.239 "adrfam": "IPv4",
00:17:49.239 "traddr": "10.0.0.2",
00:17:49.239 "trsvcid": "4420"
00:17:49.239 },
00:17:49.239 "peer_address": {
00:17:49.239 "trtype": "TCP",
00:17:49.239 "adrfam": "IPv4",
00:17:49.239 "traddr": "10.0.0.1",
00:17:49.239 "trsvcid": "46640"
00:17:49.239 },
00:17:49.239 "auth": {
00:17:49.239 "state": "completed",
00:17:49.239 "digest": "sha256",
00:17:49.239 "dhgroup": "ffdhe3072"
00:17:49.239 }
00:17:49.239 }
00:17:49.239 ]'
00:17:49.239 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:49.239 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:49.239 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:49.240 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:49.240 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:49.240 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:49.240 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:49.240 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:49.501 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=:
00:17:49.501 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=:
00:17:50.073 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:50.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:50.073 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:50.073 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.073 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
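The nvme_connect/nvme disconnect pairs above exercise the same key material through the kernel initiator. The secrets travel in the standard DH-HMAC-CHAP representation, DHHC-1:NN:<base64>:, where the two-digit NN field encodes the secret transformation (00 for an untransformed secret, 01/02/03 for SHA-256/384/512 per the NVMe base specification, as far as this editor can tell), which is why key0 through key3 in this log carry different prefixes, and why key3 (no controller secret registered) is the only one connected without --dhchap-ctrl-secret. A sketch of the kernel-side round trip, with the secret elided here since the full values appear in the log lines above:

  # Kernel initiator round trip as in auth.sh@36/@82 ($subnqn/$hostnqn as
  # before); --dhchap-ctrl-secret is added only when a controller key exists.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid "${hostnqn#*uuid:}" -l 0 \
      --dhchap-secret "DHHC-1:03:<base64 secret from the log>:"
  nvme disconnect -n "$subnqn"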
00:17:50.073 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.073 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:50.073 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:50.073 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:50.073 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:50.333 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0
00:17:50.333 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:50.333 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:50.333 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:50.334 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:50.334 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:50.334 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:50.334 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.334 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:50.334 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.334 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:50.334 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:50.334 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:50.595 
00:17:50.595 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:50.595 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:50.595 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:50.857 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:50.857 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:50.857 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.857 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:50.857 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.857 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:50.857 {
00:17:50.857 "cntlid": 25,
00:17:50.857 "qid": 0,
00:17:50.857 "state": "enabled",
00:17:50.857 "thread": "nvmf_tgt_poll_group_000",
00:17:50.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:50.857 "listen_address": {
00:17:50.857 "trtype": "TCP",
00:17:50.857 "adrfam": "IPv4",
00:17:50.857 "traddr": "10.0.0.2",
00:17:50.857 "trsvcid": "4420"
00:17:50.857 },
00:17:50.857 "peer_address": {
00:17:50.857 "trtype": "TCP",
00:17:50.857 "adrfam": "IPv4",
00:17:50.857 "traddr": "10.0.0.1",
00:17:50.857 "trsvcid": "46666"
00:17:50.857 },
00:17:50.857 "auth": {
00:17:50.857 "state": "completed",
00:17:50.857 "digest": "sha256",
00:17:50.857 "dhgroup": "ffdhe4096"
00:17:50.857 }
00:17:50.857 }
00:17:50.857 ]'
00:17:50.857 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:50.857 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:50.857 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:50.857 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:50.857 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:50.857 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:50.857 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:50.857 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:51.118 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=:
00:17:51.118 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=:
00:17:51.691 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:51.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:51.691 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:51.691 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.691 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:51.691 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.691 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:51.691 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:51.691 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:51.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1
00:17:51.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:51.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:51.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:51.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:51.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:51.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:51.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:51.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:51.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:51.952 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:52.213 
00:17:52.213 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:52.213 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:52.213 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:52.474 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:52.474 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:52.474 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:52.474 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:52.474 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:52.474 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:52.474 {
00:17:52.474 "cntlid": 27,
00:17:52.474 "qid": 0,
00:17:52.474 "state": "enabled",
00:17:52.474 "thread": "nvmf_tgt_poll_group_000",
00:17:52.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:52.474 "listen_address": {
00:17:52.474 "trtype": "TCP",
00:17:52.474 "adrfam": "IPv4",
00:17:52.474 "traddr": "10.0.0.2",
00:17:52.474 "trsvcid": "4420"
00:17:52.474 },
00:17:52.474 "peer_address": {
00:17:52.474 "trtype": "TCP",
00:17:52.474 "adrfam": "IPv4",
00:17:52.474 "traddr": "10.0.0.1",
00:17:52.474 "trsvcid": "46676"
00:17:52.474 },
00:17:52.474 "auth": {
00:17:52.474 "state": "completed",
00:17:52.474 "digest": "sha256",
00:17:52.474 "dhgroup": "ffdhe4096"
00:17:52.474 }
00:17:52.474 }
00:17:52.474 ]'
00:17:52.474 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:52.474 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:52.474 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:52.474 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:52.474 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:52.474 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:52.474 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:52.474 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:52.735 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==:
00:17:52.735 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==:
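Stepping back, the xtrace shows the shape of the whole sweep: an outer loop over dhgroups (ffdhe2048, then ffdhe3072, now ffdhe4096) and an inner loop over key indices 0 through 3, with the host options re-pinned before every connect_authenticate call. A hedged reconstruction of that driver loop, with array contents assumed from the values observed above:

  # Loop structure implied by auth.sh@119-123. The ${ckeys[$3]:+...}
  # expansion at auth.sh@68 means key indices without a controller secret
  # (key3 here) are exercised with unidirectional authentication only.
  for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ffdhe3072 ffdhe4096 ...
      for keyid in "${!keys[@]}"; do       # 0 1 2 3
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done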
00:17:53.308 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:53.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:53.569 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:53.830 
00:17:53.830 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:53.830 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:53.830 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:54.091 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:54.091 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:54.091 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:54.091 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:54.091 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:54.091 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:54.091 {
00:17:54.091 "cntlid": 29,
00:17:54.091 "qid": 0,
00:17:54.091 "state": "enabled",
00:17:54.091 "thread": "nvmf_tgt_poll_group_000",
00:17:54.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:54.091 "listen_address": {
00:17:54.091 "trtype": "TCP",
00:17:54.091 "adrfam": "IPv4",
00:17:54.091 "traddr": "10.0.0.2",
00:17:54.091 "trsvcid": "4420"
00:17:54.091 },
00:17:54.091 "peer_address": {
00:17:54.091 "trtype": "TCP",
00:17:54.091 "adrfam": "IPv4",
00:17:54.091 "traddr": "10.0.0.1",
00:17:54.091 "trsvcid": "46710"
00:17:54.091 },
00:17:54.091 "auth": {
00:17:54.091 "state": "completed",
00:17:54.091 "digest": "sha256",
00:17:54.091 "dhgroup": "ffdhe4096"
00:17:54.091 }
00:17:54.091 }
00:17:54.091 ]'
00:17:54.091 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:54.091 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:54.091 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:54.091 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:54.091 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:54.352 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:54.352 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:54.352 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:54.352 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV:
--dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:17:55.291 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.291 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.291 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.291 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.291 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.291 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.291 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:55.291 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:55.291 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:55.291 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.291 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:55.291 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:55.291 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:55.292 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.292 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:55.292 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.292 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.292 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.292 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:55.292 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.292 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.552 00:17:55.552 09:51:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.552 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.552 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.813 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.813 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.813 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.813 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.813 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.813 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.813 { 00:17:55.813 "cntlid": 31, 00:17:55.813 "qid": 0, 00:17:55.813 "state": "enabled", 00:17:55.813 "thread": "nvmf_tgt_poll_group_000", 00:17:55.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:55.813 "listen_address": { 00:17:55.813 "trtype": "TCP", 00:17:55.813 "adrfam": "IPv4", 00:17:55.813 "traddr": "10.0.0.2", 00:17:55.813 "trsvcid": "4420" 00:17:55.813 }, 00:17:55.813 "peer_address": { 00:17:55.813 "trtype": "TCP", 00:17:55.813 "adrfam": "IPv4", 00:17:55.813 "traddr": "10.0.0.1", 00:17:55.813 "trsvcid": "52982" 00:17:55.813 }, 00:17:55.813 "auth": { 00:17:55.813 "state": "completed", 00:17:55.813 "digest": "sha256", 00:17:55.813 "dhgroup": "ffdhe4096" 00:17:55.813 } 00:17:55.813 } 00:17:55.813 ]' 00:17:55.813 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.813 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.813 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.813 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:55.813 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.813 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.813 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.813 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.074 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:17:56.074 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:17:56.646 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.646 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.646 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.646 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.646 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.646 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.646 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.646 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:56.646 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:56.906 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:56.906 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.906 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:56.906 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:56.906 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:56.906 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.906 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.906 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.906 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.906 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.906 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.906 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.906 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.166 00:17:57.166 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.166 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.166 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.428 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.428 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.428 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.428 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.428 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.428 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.428 { 00:17:57.428 "cntlid": 33, 00:17:57.428 "qid": 0, 00:17:57.428 "state": "enabled", 00:17:57.428 "thread": "nvmf_tgt_poll_group_000", 00:17:57.428 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:57.428 "listen_address": { 00:17:57.428 "trtype": "TCP", 00:17:57.428 "adrfam": "IPv4", 00:17:57.428 "traddr": "10.0.0.2", 00:17:57.428 "trsvcid": "4420" 00:17:57.428 }, 00:17:57.428 "peer_address": { 00:17:57.428 "trtype": "TCP", 00:17:57.428 "adrfam": "IPv4", 00:17:57.428 "traddr": "10.0.0.1", 00:17:57.428 "trsvcid": "53022" 00:17:57.428 }, 00:17:57.428 "auth": { 00:17:57.428 "state": "completed", 00:17:57.428 "digest": "sha256", 00:17:57.428 "dhgroup": "ffdhe6144" 00:17:57.428 } 00:17:57.428 } 00:17:57.428 ]' 00:17:57.428 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.428 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.428 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.690 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:57.690 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.690 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.690 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.690 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.690 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret 
DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:17:57.690 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.632 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.893 00:17:58.893 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.893 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.893 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.155 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.155 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.155 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.155 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.155 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.155 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.155 { 00:17:59.155 "cntlid": 35, 00:17:59.155 "qid": 0, 00:17:59.155 "state": "enabled", 00:17:59.155 "thread": "nvmf_tgt_poll_group_000", 00:17:59.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:59.155 "listen_address": { 00:17:59.155 "trtype": "TCP", 00:17:59.155 "adrfam": "IPv4", 00:17:59.155 "traddr": "10.0.0.2", 00:17:59.155 "trsvcid": "4420" 00:17:59.155 }, 00:17:59.155 "peer_address": { 00:17:59.155 "trtype": "TCP", 00:17:59.155 "adrfam": "IPv4", 00:17:59.155 "traddr": "10.0.0.1", 00:17:59.155 "trsvcid": "53062" 00:17:59.155 }, 00:17:59.155 "auth": { 00:17:59.155 "state": "completed", 00:17:59.155 "digest": "sha256", 00:17:59.155 "dhgroup": "ffdhe6144" 00:17:59.155 } 00:17:59.155 } 00:17:59.155 ]' 00:17:59.155 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.155 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.155 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.417 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:59.417 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.417 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.417 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.417 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.417 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:17:59.417 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:18:00.358 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.358 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.358 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.358 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.358 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.358 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.358 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:00.358 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:00.358 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:18:00.358 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.358 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:00.358 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:00.358 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.358 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.358 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.358 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.358 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.358 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.358 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.358 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.358 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.645 00:18:00.645 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.645 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.645 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.906 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.906 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.906 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.906 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.906 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.906 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.906 { 00:18:00.906 "cntlid": 37, 00:18:00.906 "qid": 0, 00:18:00.906 "state": "enabled", 00:18:00.906 "thread": "nvmf_tgt_poll_group_000", 00:18:00.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:00.906 "listen_address": { 00:18:00.906 "trtype": "TCP", 00:18:00.906 "adrfam": "IPv4", 00:18:00.906 "traddr": "10.0.0.2", 00:18:00.906 "trsvcid": "4420" 00:18:00.906 }, 00:18:00.906 "peer_address": { 00:18:00.906 "trtype": "TCP", 00:18:00.906 "adrfam": "IPv4", 00:18:00.906 "traddr": "10.0.0.1", 00:18:00.906 "trsvcid": "53086" 00:18:00.906 }, 00:18:00.906 "auth": { 00:18:00.906 "state": "completed", 00:18:00.906 "digest": "sha256", 00:18:00.906 "dhgroup": "ffdhe6144" 00:18:00.906 } 00:18:00.906 } 00:18:00.906 ]' 00:18:00.906 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.906 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.906 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.906 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:00.906 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.906 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.906 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:00.906 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.167 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:18:01.167 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:18:01.739 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.000 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.000 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.000 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.000 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.000 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.000 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:02.000 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:02.000 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:02.000 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.000 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:02.000 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:02.000 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:02.000 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.000 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:02.000 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.000 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.000 09:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.000 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:02.000 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.000 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.260 00:18:02.521 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.521 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.521 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.521 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.521 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.521 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.521 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.521 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.521 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.521 { 00:18:02.521 "cntlid": 39, 00:18:02.521 "qid": 0, 00:18:02.521 "state": "enabled", 00:18:02.521 "thread": "nvmf_tgt_poll_group_000", 00:18:02.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.521 "listen_address": { 00:18:02.521 "trtype": "TCP", 00:18:02.521 "adrfam": "IPv4", 00:18:02.521 "traddr": "10.0.0.2", 00:18:02.521 "trsvcid": "4420" 00:18:02.521 }, 00:18:02.521 "peer_address": { 00:18:02.521 "trtype": "TCP", 00:18:02.521 "adrfam": "IPv4", 00:18:02.521 "traddr": "10.0.0.1", 00:18:02.521 "trsvcid": "53104" 00:18:02.521 }, 00:18:02.521 "auth": { 00:18:02.521 "state": "completed", 00:18:02.521 "digest": "sha256", 00:18:02.521 "dhgroup": "ffdhe6144" 00:18:02.521 } 00:18:02.521 } 00:18:02.521 ]' 00:18:02.521 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.521 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.521 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.781 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:02.781 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.781 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:18:02.781 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.781 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.781 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:18:02.781 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
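The DHHC-1:... strings handed to nvme connect throughout this run are DH-HMAC-CHAP shared secrets in the representation defined by the NVMe-oF specification: DHHC-1:<t>:<base64>:, where <t> names the transformation applied to the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the secret bytes followed by a 4-byte CRC-32 check value. A small sketch that splits one of the secrets from this log into those fields (illustrative only; the CRC interpretation comes from the spec, not from anything this trace verifies):

# Field layout of a DH-HMAC-CHAP secret, using the key0 secret seen above.
secret='DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==:'
IFS=':' read -r prefix transform b64 _ <<< "$secret"
echo "prefix=$prefix transform=$transform"   # 00=none, 01=SHA-256, 02=SHA-384, 03=SHA-512
# Decoded payload = secret bytes plus a trailing 4-byte CRC-32 (per the spec).
base64 -d <<< "$b64" | wc -c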
00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.724 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.294 00:18:04.294 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.294 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.294 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.295 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.295 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.295 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.295 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.295 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.295 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.295 { 00:18:04.295 "cntlid": 41, 00:18:04.295 "qid": 0, 00:18:04.295 "state": "enabled", 00:18:04.295 "thread": "nvmf_tgt_poll_group_000", 00:18:04.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.295 "listen_address": { 00:18:04.295 "trtype": "TCP", 00:18:04.295 "adrfam": "IPv4", 00:18:04.295 "traddr": "10.0.0.2", 00:18:04.295 "trsvcid": "4420" 00:18:04.295 }, 00:18:04.295 "peer_address": { 00:18:04.295 "trtype": "TCP", 00:18:04.295 "adrfam": "IPv4", 00:18:04.295 "traddr": "10.0.0.1", 00:18:04.295 "trsvcid": "35630" 00:18:04.295 }, 00:18:04.295 "auth": { 00:18:04.295 "state": "completed", 00:18:04.295 "digest": "sha256", 00:18:04.295 "dhgroup": "ffdhe8192" 00:18:04.295 } 00:18:04.295 } 00:18:04.295 ]' 00:18:04.295 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.555 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.555 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.555 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:04.555 09:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.555 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.555 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.555 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.817 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:18:04.817 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:18:05.389 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.389 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.389 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.389 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.389 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.389 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.389 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:05.389 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:05.649 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:05.649 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.649 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:05.649 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:05.649 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:05.649 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.649 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.649 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.649 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.649 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.649 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.649 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.649 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.220 00:18:06.220 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.220 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.220 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.220 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.220 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.220 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.220 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.220 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.220 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.220 { 00:18:06.220 "cntlid": 43, 00:18:06.220 "qid": 0, 00:18:06.220 "state": "enabled", 00:18:06.220 "thread": "nvmf_tgt_poll_group_000", 00:18:06.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.220 "listen_address": { 00:18:06.220 "trtype": "TCP", 00:18:06.220 "adrfam": "IPv4", 00:18:06.220 "traddr": "10.0.0.2", 00:18:06.220 "trsvcid": "4420" 00:18:06.220 }, 00:18:06.220 "peer_address": { 00:18:06.220 "trtype": "TCP", 00:18:06.220 "adrfam": "IPv4", 00:18:06.220 "traddr": "10.0.0.1", 00:18:06.220 "trsvcid": "35658" 00:18:06.220 }, 00:18:06.220 "auth": { 00:18:06.220 "state": "completed", 00:18:06.220 "digest": "sha256", 00:18:06.220 "dhgroup": "ffdhe8192" 00:18:06.220 } 00:18:06.220 } 00:18:06.220 ]' 00:18:06.220 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.220 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:18:06.220 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.481 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:06.481 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.481 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.481 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.481 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.481 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:18:06.481 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:07.425 09:51:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.425 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.041 00:18:08.041 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.041 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.041 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.041 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.041 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.041 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.041 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.041 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.041 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.041 { 00:18:08.041 "cntlid": 45, 00:18:08.041 "qid": 0, 00:18:08.041 "state": "enabled", 00:18:08.041 "thread": "nvmf_tgt_poll_group_000", 00:18:08.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:08.041 "listen_address": { 00:18:08.041 "trtype": "TCP", 00:18:08.041 "adrfam": "IPv4", 00:18:08.041 "traddr": "10.0.0.2", 00:18:08.041 "trsvcid": "4420" 00:18:08.041 }, 00:18:08.041 "peer_address": { 00:18:08.041 "trtype": "TCP", 00:18:08.041 "adrfam": "IPv4", 00:18:08.041 "traddr": "10.0.0.1", 00:18:08.041 "trsvcid": "35684" 00:18:08.041 }, 00:18:08.041 "auth": { 00:18:08.041 "state": "completed", 00:18:08.041 "digest": "sha256", 00:18:08.041 "dhgroup": "ffdhe8192" 00:18:08.041 } 00:18:08.041 } 00:18:08.041 ]' 00:18:08.041 
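Every attach in this log is validated the same way before being torn down: the controller must surface on the host under the expected name, and the target's view of the qpair must report the negotiated digest, dhgroup, and a completed auth state, as the jq checks that follow show. A condensed form of those checks at target/auth.sh@73-77, assuming the same hostrpc and rpc_cmd helpers and the values of this sha256/ffdhe8192 iteration:

# Host side: the attached controller should be listed as nvme0.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
# Target side: the qpair must have completed DH-HMAC-CHAP with the configured parameters.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]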
09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.344 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.344 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.344 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:08.344 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.344 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.344 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.344 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.344 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:18:08.344 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:18:09.310 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.310 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.310 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.310 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.310 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.310 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.310 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:09.310 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:09.310 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:09.310 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.310 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:09.310 09:51:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:09.310 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:09.310 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.310 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:09.310 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.310 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.310 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.310 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:09.310 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.310 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.883 00:18:09.883 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.883 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.883 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.145 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.145 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.145 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.145 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.145 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.145 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.145 { 00:18:10.145 "cntlid": 47, 00:18:10.145 "qid": 0, 00:18:10.145 "state": "enabled", 00:18:10.145 "thread": "nvmf_tgt_poll_group_000", 00:18:10.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:10.145 "listen_address": { 00:18:10.145 "trtype": "TCP", 00:18:10.145 "adrfam": "IPv4", 00:18:10.145 "traddr": "10.0.0.2", 00:18:10.145 "trsvcid": "4420" 00:18:10.145 }, 00:18:10.145 "peer_address": { 00:18:10.145 "trtype": "TCP", 00:18:10.145 "adrfam": "IPv4", 00:18:10.145 "traddr": "10.0.0.1", 00:18:10.145 "trsvcid": "35720" 00:18:10.145 }, 00:18:10.145 "auth": { 00:18:10.145 "state": "completed", 00:18:10.145 
"digest": "sha256", 00:18:10.145 "dhgroup": "ffdhe8192" 00:18:10.145 } 00:18:10.145 } 00:18:10.145 ]' 00:18:10.145 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.145 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.145 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.145 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:10.145 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.145 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.145 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.145 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.406 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:18:10.406 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:18:10.977 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.977 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.977 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.977 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.977 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.977 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:10.977 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.977 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.977 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:10.977 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:11.237 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:11.237 09:51:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.237 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:11.237 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:11.237 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:11.237 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.237 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.237 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.237 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.237 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.238 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.238 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.238 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.498 00:18:11.498 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.498 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.498 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.498 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.498 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.498 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.498 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.498 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.498 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.498 { 00:18:11.498 "cntlid": 49, 00:18:11.498 "qid": 0, 00:18:11.498 "state": "enabled", 00:18:11.498 "thread": "nvmf_tgt_poll_group_000", 00:18:11.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.498 "listen_address": { 00:18:11.498 "trtype": "TCP", 00:18:11.498 "adrfam": "IPv4", 
00:18:11.498 "traddr": "10.0.0.2", 00:18:11.498 "trsvcid": "4420" 00:18:11.498 }, 00:18:11.498 "peer_address": { 00:18:11.498 "trtype": "TCP", 00:18:11.498 "adrfam": "IPv4", 00:18:11.498 "traddr": "10.0.0.1", 00:18:11.498 "trsvcid": "35754" 00:18:11.498 }, 00:18:11.498 "auth": { 00:18:11.498 "state": "completed", 00:18:11.498 "digest": "sha384", 00:18:11.498 "dhgroup": "null" 00:18:11.498 } 00:18:11.498 } 00:18:11.498 ]' 00:18:11.498 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.758 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.758 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.758 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:11.758 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.759 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.759 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.759 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.019 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:18:12.019 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:18:12.590 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.590 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.590 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.590 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.590 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.590 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.590 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:12.590 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:12.851 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:12.851 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.851 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:12.851 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:12.851 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:12.851 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.851 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.851 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.851 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.851 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.851 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.851 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.851 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.112 00:18:13.112 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.112 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.112 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.112 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.112 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.112 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.112 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.112 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.112 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.112 { 00:18:13.112 "cntlid": 51, 00:18:13.112 "qid": 0, 00:18:13.112 "state": "enabled", 
00:18:13.112 "thread": "nvmf_tgt_poll_group_000", 00:18:13.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:13.112 "listen_address": { 00:18:13.112 "trtype": "TCP", 00:18:13.112 "adrfam": "IPv4", 00:18:13.112 "traddr": "10.0.0.2", 00:18:13.112 "trsvcid": "4420" 00:18:13.112 }, 00:18:13.112 "peer_address": { 00:18:13.112 "trtype": "TCP", 00:18:13.112 "adrfam": "IPv4", 00:18:13.112 "traddr": "10.0.0.1", 00:18:13.112 "trsvcid": "35784" 00:18:13.112 }, 00:18:13.112 "auth": { 00:18:13.112 "state": "completed", 00:18:13.112 "digest": "sha384", 00:18:13.112 "dhgroup": "null" 00:18:13.112 } 00:18:13.112 } 00:18:13.112 ]' 00:18:13.112 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.372 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:13.372 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.372 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:13.372 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.372 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.372 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.372 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.633 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:18:13.633 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:18:14.203 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.203 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:14.203 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.203 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.203 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.203 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.203 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
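Note: throughout these iterations the trace prints ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) with a literal $3, since bash xtrace shows compound array assignments unexpanded. The idiom makes bidirectional authentication optional: the --dhchap-ctrlr-key argument pair is produced only when a controller key exists at that index, which is why the key3 iterations above call nvmf_subsystem_add_host with --dhchap-key key3 and no controller key. A condensed sketch (variable names mirror auth.sh; the values below are illustrative stand-ins, not taken from the test):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    keyid=3
    ckeys=( ckey0 ckey1 ckey2 "" )   # illustrative: no controller key for key3
    # Expands to zero words when ckeys[$keyid] is empty, so the RPC below is
    # called without --dhchap-ctrlr-key and authentication stays unidirectional.
    ckey=( ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"} )
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"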
00:18:14.204 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:14.464 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:14.464 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.464 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:14.464 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:14.464 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:14.464 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.464 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.464 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.464 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.464 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.464 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.464 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.464 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.724 00:18:14.724 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.724 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.724 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.724 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.724 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.724 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.724 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.724 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.724 09:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.724 { 00:18:14.724 "cntlid": 53, 00:18:14.724 "qid": 0, 00:18:14.724 "state": "enabled", 00:18:14.724 "thread": "nvmf_tgt_poll_group_000", 00:18:14.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.724 "listen_address": { 00:18:14.724 "trtype": "TCP", 00:18:14.724 "adrfam": "IPv4", 00:18:14.724 "traddr": "10.0.0.2", 00:18:14.724 "trsvcid": "4420" 00:18:14.724 }, 00:18:14.724 "peer_address": { 00:18:14.724 "trtype": "TCP", 00:18:14.724 "adrfam": "IPv4", 00:18:14.724 "traddr": "10.0.0.1", 00:18:14.724 "trsvcid": "48818" 00:18:14.724 }, 00:18:14.724 "auth": { 00:18:14.724 "state": "completed", 00:18:14.724 "digest": "sha384", 00:18:14.724 "dhgroup": "null" 00:18:14.724 } 00:18:14.724 } 00:18:14.724 ]' 00:18:14.724 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.724 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.984 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.984 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:14.984 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.984 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.984 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.984 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.984 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:18:14.984 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.928 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.188 00:18:16.188 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.188 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.188 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.450 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.450 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.450 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.450 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.450 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.450 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.450 { 00:18:16.450 "cntlid": 55, 00:18:16.450 "qid": 0, 00:18:16.450 "state": "enabled", 00:18:16.450 "thread": "nvmf_tgt_poll_group_000", 00:18:16.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.450 "listen_address": { 00:18:16.450 "trtype": "TCP", 00:18:16.450 "adrfam": "IPv4", 00:18:16.450 "traddr": "10.0.0.2", 00:18:16.450 "trsvcid": "4420" 00:18:16.450 }, 00:18:16.450 "peer_address": { 00:18:16.450 "trtype": "TCP", 00:18:16.450 "adrfam": "IPv4", 00:18:16.450 "traddr": "10.0.0.1", 00:18:16.450 "trsvcid": "48854" 00:18:16.450 }, 00:18:16.450 "auth": { 00:18:16.450 "state": "completed", 00:18:16.450 "digest": "sha384", 00:18:16.450 "dhgroup": "null" 00:18:16.450 } 00:18:16.450 } 00:18:16.450 ]' 00:18:16.450 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.450 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.450 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.450 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:16.450 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.450 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.450 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.450 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.710 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:18:16.710 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:18:17.282 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.282 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.282 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.282 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.282 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.282 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:17.282 09:51:48 
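Note: at this point the middle loop advances to the next DH group (the for dhgroup trace above, auth.sh@119), so the following iterations repeat the same four-key cycle with sha384 and ffdhe2048. The whole sweep these traces come from is three nested loops; a condensed skeleton, with loop headers matching the auth.sh@118-121 traces and the body reduced to the two calls that drive each iteration:

    # Skeleton of the sweep producing this section's traces (hostrpc and
    # connect_authenticate are auth.sh helpers visible in the traces above).
    for digest in "${digests[@]}"; do        # auth.sh@118
      for dhgroup in "${dhgroups[@]}"; do    # auth.sh@119
        for keyid in "${!keys[@]}"; do       # auth.sh@120
          hostrpc bdev_nvme_set_options \
              --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"   # @121
          connect_authenticate "$digest" "$dhgroup" "$keyid"            # @123
        done
      done
    done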
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.282 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:17.282 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:17.543 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:17.543 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.543 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:17.543 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:17.543 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:17.543 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.543 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.543 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.543 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.543 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.543 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.543 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.543 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.803 00:18:17.803 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.803 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.803 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.064 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.064 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.064 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:18.064 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.064 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.064 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.064 { 00:18:18.064 "cntlid": 57, 00:18:18.064 "qid": 0, 00:18:18.064 "state": "enabled", 00:18:18.064 "thread": "nvmf_tgt_poll_group_000", 00:18:18.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:18.064 "listen_address": { 00:18:18.064 "trtype": "TCP", 00:18:18.064 "adrfam": "IPv4", 00:18:18.064 "traddr": "10.0.0.2", 00:18:18.064 "trsvcid": "4420" 00:18:18.064 }, 00:18:18.064 "peer_address": { 00:18:18.064 "trtype": "TCP", 00:18:18.064 "adrfam": "IPv4", 00:18:18.064 "traddr": "10.0.0.1", 00:18:18.064 "trsvcid": "48892" 00:18:18.064 }, 00:18:18.064 "auth": { 00:18:18.064 "state": "completed", 00:18:18.064 "digest": "sha384", 00:18:18.064 "dhgroup": "ffdhe2048" 00:18:18.064 } 00:18:18.064 } 00:18:18.064 ]' 00:18:18.064 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.064 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.064 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.064 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:18.064 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.064 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.064 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.064 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.325 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:18:18.325 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:18:18.896 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.896 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.896 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.896 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.896 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.896 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.896 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:18.896 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:19.157 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:19.157 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.157 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:19.157 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:19.157 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:19.157 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.157 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.157 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.157 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.157 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.157 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.157 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.157 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.419 00:18:19.419 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.419 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.419 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.678 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.678 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.678 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.678 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.678 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.678 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.678 { 00:18:19.678 "cntlid": 59, 00:18:19.678 "qid": 0, 00:18:19.678 "state": "enabled", 00:18:19.678 "thread": "nvmf_tgt_poll_group_000", 00:18:19.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:19.678 "listen_address": { 00:18:19.678 "trtype": "TCP", 00:18:19.678 "adrfam": "IPv4", 00:18:19.678 "traddr": "10.0.0.2", 00:18:19.678 "trsvcid": "4420" 00:18:19.678 }, 00:18:19.678 "peer_address": { 00:18:19.678 "trtype": "TCP", 00:18:19.678 "adrfam": "IPv4", 00:18:19.678 "traddr": "10.0.0.1", 00:18:19.678 "trsvcid": "48908" 00:18:19.678 }, 00:18:19.678 "auth": { 00:18:19.678 "state": "completed", 00:18:19.678 "digest": "sha384", 00:18:19.678 "dhgroup": "ffdhe2048" 00:18:19.678 } 00:18:19.678 } 00:18:19.678 ]' 00:18:19.678 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.678 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.678 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.678 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:19.678 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.679 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.679 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.679 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.939 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:18:19.939 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:18:20.508 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.509 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.509 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.509 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.509 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.509 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.509 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.509 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.768 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:20.768 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.768 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:20.768 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:20.768 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:20.768 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.768 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.768 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.768 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.768 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.768 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.768 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.768 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.028 00:18:21.028 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.028 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.028 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.287 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.287 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.287 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.287 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.287 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.287 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.287 { 00:18:21.287 "cntlid": 61, 00:18:21.287 "qid": 0, 00:18:21.287 "state": "enabled", 00:18:21.287 "thread": "nvmf_tgt_poll_group_000", 00:18:21.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:21.287 "listen_address": { 00:18:21.287 "trtype": "TCP", 00:18:21.287 "adrfam": "IPv4", 00:18:21.287 "traddr": "10.0.0.2", 00:18:21.287 "trsvcid": "4420" 00:18:21.287 }, 00:18:21.287 "peer_address": { 00:18:21.287 "trtype": "TCP", 00:18:21.287 "adrfam": "IPv4", 00:18:21.287 "traddr": "10.0.0.1", 00:18:21.287 "trsvcid": "48934" 00:18:21.287 }, 00:18:21.287 "auth": { 00:18:21.287 "state": "completed", 00:18:21.287 "digest": "sha384", 00:18:21.287 "dhgroup": "ffdhe2048" 00:18:21.287 } 00:18:21.287 } 00:18:21.287 ]' 00:18:21.287 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.287 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.287 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.287 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:21.287 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.287 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.287 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.287 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.546 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:18:21.546 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:18:22.115 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.115 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.115 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.115 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.115 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.115 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.115 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:22.115 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:22.376 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:22.376 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.376 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:22.376 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:22.376 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:22.376 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.376 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:22.376 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.376 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.376 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.376 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:22.376 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:22.376 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:22.636 00:18:22.636 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.636 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.636 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.897 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.897 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.897 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.897 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.897 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.897 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.897 { 00:18:22.897 "cntlid": 63, 00:18:22.897 "qid": 0, 00:18:22.897 "state": "enabled", 00:18:22.897 "thread": "nvmf_tgt_poll_group_000", 00:18:22.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:22.897 "listen_address": { 00:18:22.897 "trtype": "TCP", 00:18:22.897 "adrfam": "IPv4", 00:18:22.897 "traddr": "10.0.0.2", 00:18:22.897 "trsvcid": "4420" 00:18:22.897 }, 00:18:22.897 "peer_address": { 00:18:22.897 "trtype": "TCP", 00:18:22.897 "adrfam": "IPv4", 00:18:22.897 "traddr": "10.0.0.1", 00:18:22.897 "trsvcid": "48970" 00:18:22.897 }, 00:18:22.897 "auth": { 00:18:22.897 "state": "completed", 00:18:22.897 "digest": "sha384", 00:18:22.897 "dhgroup": "ffdhe2048" 00:18:22.897 } 00:18:22.897 } 00:18:22.897 ]' 00:18:22.897 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.897 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.897 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.897 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:22.897 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.897 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.897 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.897 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.159 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:18:23.159 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:18:23.730 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:23.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.730 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.730 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.730 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.730 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.730 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.730 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.730 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:23.730 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:23.991 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:23.991 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.991 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:23.991 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:23.991 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:23.991 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.991 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.991 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.991 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.991 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.991 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.991 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.991 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.251 
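For reference, each connect_authenticate pass traced above reduces to the short sequence below. This is a sketch, not test code: hostrpc stands for the rpc.py invocation against the host-side socket (/var/tmp/host.sock) shown on every auth.sh@31 line, rpc_cmd is the wrapper that talks to the nvmf target, and $hostnqn abbreviates the nqn.2014-08.org.nvmexpress:uuid:00d0226a-... host NQN used throughout this run.

    # Pin the host initiator to one digest/dhgroup combination for this pass
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # Register the host on the subsystem with this pass's DH-HMAC-CHAP key(s);
    # passes that use key3 omit --dhchap-ctrlr-key, since no controller key is set for it
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Authenticate via the SPDK initiator and confirm the controller came up
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
            --dhchap-key key0 --dhchap-ctrlr-key ckey0
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'    # expects nvme0
    # Inspect the negotiated auth state on the target side, then tear down
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
    hostrpc bdev_nvme_detach_controller nvme0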
00:18:24.251 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.251 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.251 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.512 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.512 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.512 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.512 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.512 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.512 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.512 { 00:18:24.512 "cntlid": 65, 00:18:24.512 "qid": 0, 00:18:24.512 "state": "enabled", 00:18:24.512 "thread": "nvmf_tgt_poll_group_000", 00:18:24.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:24.512 "listen_address": { 00:18:24.512 "trtype": "TCP", 00:18:24.512 "adrfam": "IPv4", 00:18:24.512 "traddr": "10.0.0.2", 00:18:24.512 "trsvcid": "4420" 00:18:24.512 }, 00:18:24.512 "peer_address": { 00:18:24.512 "trtype": "TCP", 00:18:24.512 "adrfam": "IPv4", 00:18:24.512 "traddr": "10.0.0.1", 00:18:24.512 "trsvcid": "60274" 00:18:24.512 }, 00:18:24.512 "auth": { 00:18:24.512 "state": "completed", 00:18:24.512 "digest": "sha384", 00:18:24.512 "dhgroup": "ffdhe3072" 00:18:24.512 } 00:18:24.512 } 00:18:24.512 ]' 00:18:24.512 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.512 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.513 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.513 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:24.513 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.513 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.513 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.513 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.773 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:18:24.773 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:18:25.343 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.343 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.343 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.343 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.343 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.343 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.343 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:25.343 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:25.603 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:25.603 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.603 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:25.603 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:25.603 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:25.603 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.603 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.603 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.603 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.603 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.603 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.603 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.603 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.864 00:18:25.864 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.864 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.864 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.125 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.125 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.125 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.125 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.125 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.125 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.125 { 00:18:26.125 "cntlid": 67, 00:18:26.125 "qid": 0, 00:18:26.125 "state": "enabled", 00:18:26.125 "thread": "nvmf_tgt_poll_group_000", 00:18:26.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:26.125 "listen_address": { 00:18:26.125 "trtype": "TCP", 00:18:26.125 "adrfam": "IPv4", 00:18:26.125 "traddr": "10.0.0.2", 00:18:26.125 "trsvcid": "4420" 00:18:26.125 }, 00:18:26.125 "peer_address": { 00:18:26.125 "trtype": "TCP", 00:18:26.125 "adrfam": "IPv4", 00:18:26.125 "traddr": "10.0.0.1", 00:18:26.125 "trsvcid": "60306" 00:18:26.125 }, 00:18:26.125 "auth": { 00:18:26.125 "state": "completed", 00:18:26.125 "digest": "sha384", 00:18:26.125 "dhgroup": "ffdhe3072" 00:18:26.125 } 00:18:26.125 } 00:18:26.125 ]' 00:18:26.125 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.125 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.125 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.125 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:26.125 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.125 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.125 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.125 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.385 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret 
DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:18:26.385 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:18:26.956 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.956 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.956 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.956 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.956 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.956 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.956 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:26.956 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:27.216 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:27.216 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.216 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:27.216 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:27.216 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:27.216 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.216 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.216 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.216 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.216 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.216 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.216 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.216 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.477 00:18:27.477 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.477 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.477 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.737 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.737 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.737 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.737 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.737 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.737 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.737 { 00:18:27.737 "cntlid": 69, 00:18:27.737 "qid": 0, 00:18:27.737 "state": "enabled", 00:18:27.737 "thread": "nvmf_tgt_poll_group_000", 00:18:27.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:27.737 "listen_address": { 00:18:27.737 "trtype": "TCP", 00:18:27.737 "adrfam": "IPv4", 00:18:27.737 "traddr": "10.0.0.2", 00:18:27.737 "trsvcid": "4420" 00:18:27.737 }, 00:18:27.737 "peer_address": { 00:18:27.737 "trtype": "TCP", 00:18:27.737 "adrfam": "IPv4", 00:18:27.737 "traddr": "10.0.0.1", 00:18:27.737 "trsvcid": "60334" 00:18:27.737 }, 00:18:27.737 "auth": { 00:18:27.737 "state": "completed", 00:18:27.737 "digest": "sha384", 00:18:27.737 "dhgroup": "ffdhe3072" 00:18:27.737 } 00:18:27.737 } 00:18:27.737 ]' 00:18:27.737 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.737 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.737 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.737 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:27.737 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.737 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.737 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.737 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:27.998 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:18:27.998 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:18:28.568 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.568 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.568 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.568 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.568 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.568 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.568 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:28.568 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:28.828 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:28.828 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.828 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:28.828 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:28.828 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:28.828 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.828 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:28.828 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.828 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.828 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.828 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
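The same credentials are also exercised through the kernel initiator, as in the nvme connect / nvme disconnect pair traced just above. A sketch with the secrets abbreviated ($hostid is the 00d0226a-... UUID, and the full DHHC-1 strings appear verbatim in the trace; connects that use the key3 secret carry no --dhchap-ctrl-secret, matching the missing controller key for key3):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
            -q "$hostnqn" --hostid "$hostid" -l 0 \
            --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # logs "disconnected 1 controller(s)"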
00:18:28.828 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:28.828 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.088 00:18:29.089 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.089 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.089 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.349 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.349 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.349 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.349 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.349 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.349 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.349 { 00:18:29.349 "cntlid": 71, 00:18:29.349 "qid": 0, 00:18:29.349 "state": "enabled", 00:18:29.349 "thread": "nvmf_tgt_poll_group_000", 00:18:29.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:29.349 "listen_address": { 00:18:29.349 "trtype": "TCP", 00:18:29.349 "adrfam": "IPv4", 00:18:29.349 "traddr": "10.0.0.2", 00:18:29.349 "trsvcid": "4420" 00:18:29.349 }, 00:18:29.349 "peer_address": { 00:18:29.349 "trtype": "TCP", 00:18:29.349 "adrfam": "IPv4", 00:18:29.349 "traddr": "10.0.0.1", 00:18:29.349 "trsvcid": "60366" 00:18:29.349 }, 00:18:29.349 "auth": { 00:18:29.349 "state": "completed", 00:18:29.349 "digest": "sha384", 00:18:29.349 "dhgroup": "ffdhe3072" 00:18:29.349 } 00:18:29.349 } 00:18:29.349 ]' 00:18:29.349 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.349 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.349 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.349 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:29.349 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.349 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.349 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.349 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.609 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:18:29.609 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:18:30.179 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.179 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.179 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.179 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.179 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.179 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.179 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.179 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:30.179 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:30.439 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:30.439 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.439 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:30.439 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:30.439 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:30.439 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.439 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.439 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.439 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.439 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
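Each pass is judged by the auth block that nvmf_subsystem_get_qpairs reports for the new qpair; the auth.sh@75 through @77 checks in the trace amount to the following assertions (a sketch, shown here for the sha384/ffdhe4096 pass that this iteration configures):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]   # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]   # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication finished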
00:18:30.439 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.439 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.439 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.698 00:18:30.698 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.698 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.698 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.958 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.958 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.958 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.958 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.958 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.958 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.958 { 00:18:30.958 "cntlid": 73, 00:18:30.958 "qid": 0, 00:18:30.958 "state": "enabled", 00:18:30.958 "thread": "nvmf_tgt_poll_group_000", 00:18:30.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:30.958 "listen_address": { 00:18:30.958 "trtype": "TCP", 00:18:30.958 "adrfam": "IPv4", 00:18:30.958 "traddr": "10.0.0.2", 00:18:30.958 "trsvcid": "4420" 00:18:30.958 }, 00:18:30.958 "peer_address": { 00:18:30.958 "trtype": "TCP", 00:18:30.958 "adrfam": "IPv4", 00:18:30.958 "traddr": "10.0.0.1", 00:18:30.958 "trsvcid": "60378" 00:18:30.958 }, 00:18:30.958 "auth": { 00:18:30.958 "state": "completed", 00:18:30.958 "digest": "sha384", 00:18:30.958 "dhgroup": "ffdhe4096" 00:18:30.958 } 00:18:30.958 } 00:18:30.958 ]' 00:18:30.958 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.958 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.958 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.958 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:30.958 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.958 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.958 
09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.958 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.218 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:18:31.218 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:18:31.788 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.788 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.788 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.788 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.788 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.788 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.788 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:31.788 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:32.048 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:32.048 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.048 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:32.048 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:32.048 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:32.048 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.048 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.048 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.048 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.048 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.048 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.048 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.048 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.309 00:18:32.309 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.309 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.309 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.569 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.569 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.569 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.569 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.569 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.569 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.569 { 00:18:32.569 "cntlid": 75, 00:18:32.569 "qid": 0, 00:18:32.569 "state": "enabled", 00:18:32.569 "thread": "nvmf_tgt_poll_group_000", 00:18:32.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:32.569 "listen_address": { 00:18:32.569 "trtype": "TCP", 00:18:32.569 "adrfam": "IPv4", 00:18:32.569 "traddr": "10.0.0.2", 00:18:32.569 "trsvcid": "4420" 00:18:32.569 }, 00:18:32.569 "peer_address": { 00:18:32.569 "trtype": "TCP", 00:18:32.569 "adrfam": "IPv4", 00:18:32.569 "traddr": "10.0.0.1", 00:18:32.569 "trsvcid": "60400" 00:18:32.569 }, 00:18:32.569 "auth": { 00:18:32.569 "state": "completed", 00:18:32.569 "digest": "sha384", 00:18:32.569 "dhgroup": "ffdhe4096" 00:18:32.569 } 00:18:32.569 } 00:18:32.569 ]' 00:18:32.569 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.569 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.569 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.569 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:18:32.569 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.569 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.569 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.569 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.829 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:18:32.829 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:18:33.400 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.400 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.400 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.400 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.400 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.400 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.400 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:33.400 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:33.660 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:33.660 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.660 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:33.660 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:33.660 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:33.660 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.660 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.660 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.660 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.660 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.660 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.660 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.660 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.919 00:18:33.919 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.919 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.920 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.179 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.179 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.179 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.179 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.179 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.179 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.179 { 00:18:34.179 "cntlid": 77, 00:18:34.179 "qid": 0, 00:18:34.179 "state": "enabled", 00:18:34.179 "thread": "nvmf_tgt_poll_group_000", 00:18:34.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:34.179 "listen_address": { 00:18:34.179 "trtype": "TCP", 00:18:34.179 "adrfam": "IPv4", 00:18:34.179 "traddr": "10.0.0.2", 00:18:34.179 "trsvcid": "4420" 00:18:34.179 }, 00:18:34.179 "peer_address": { 00:18:34.179 "trtype": "TCP", 00:18:34.179 "adrfam": "IPv4", 00:18:34.179 "traddr": "10.0.0.1", 00:18:34.179 "trsvcid": "52970" 00:18:34.179 }, 00:18:34.179 "auth": { 00:18:34.179 "state": "completed", 00:18:34.179 "digest": "sha384", 00:18:34.179 "dhgroup": "ffdhe4096" 00:18:34.179 } 00:18:34.179 } 00:18:34.179 ]' 00:18:34.179 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.179 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.179 09:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.179 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:34.179 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.439 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.439 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.439 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.439 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:18:34.439 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:18:35.379 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.379 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.379 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.379 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.379 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.379 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.379 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:35.379 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:35.379 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:35.379 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.379 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:35.379 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:35.379 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:35.379 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.379 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:35.379 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.379 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.379 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.379 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:35.379 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:35.379 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:35.640 00:18:35.640 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.640 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.640 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.901 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.901 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.901 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.901 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.901 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.901 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.901 { 00:18:35.901 "cntlid": 79, 00:18:35.901 "qid": 0, 00:18:35.901 "state": "enabled", 00:18:35.901 "thread": "nvmf_tgt_poll_group_000", 00:18:35.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:35.901 "listen_address": { 00:18:35.901 "trtype": "TCP", 00:18:35.901 "adrfam": "IPv4", 00:18:35.901 "traddr": "10.0.0.2", 00:18:35.901 "trsvcid": "4420" 00:18:35.901 }, 00:18:35.901 "peer_address": { 00:18:35.901 "trtype": "TCP", 00:18:35.901 "adrfam": "IPv4", 00:18:35.901 "traddr": "10.0.0.1", 00:18:35.901 "trsvcid": "52986" 00:18:35.901 }, 00:18:35.901 "auth": { 00:18:35.901 "state": "completed", 00:18:35.901 "digest": "sha384", 00:18:35.901 "dhgroup": "ffdhe4096" 00:18:35.901 } 00:18:35.901 } 00:18:35.901 ]' 00:18:35.901 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.901 09:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.901 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.901 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:35.901 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.901 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.901 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.901 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.161 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:18:36.161 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:18:36.731 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.731 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.731 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.731 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.731 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.731 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:36.731 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.731 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:36.731 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:36.991 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:36.991 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.991 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:36.991 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:36.991 09:52:07 
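The cycle that repeats throughout this trace is easier to follow in isolation. Every command below appears verbatim in the log; the socket paths, NQNs and addresses are the ones used in this run, while key0/ckey0 stand for whichever named keys the script loaded earlier, so treat them as placeholders. As seen above, rpc_cmd is the harness helper that reaches the target application's RPC socket, while hostrpc drives the separate host-side SPDK app on /var/tmp/host.sock:

    HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    SUBNQN="nqn.2024-03.io.spdk:cnode0"
    HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be"

    # pin the host-side initiator to exactly one digest and one DH group
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # register the host on the target subsystem with the key pair under test
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # attaching a controller forces the DH-HMAC-CHAP handshake to run
    $HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
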
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:36.991 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.991 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.991 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.991 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.991 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.991 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.991 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.991 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.251 00:18:37.251 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.251 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.251 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.510 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.510 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.510 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.510 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.510 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.510 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.510 { 00:18:37.510 "cntlid": 81, 00:18:37.510 "qid": 0, 00:18:37.510 "state": "enabled", 00:18:37.510 "thread": "nvmf_tgt_poll_group_000", 00:18:37.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:37.510 "listen_address": { 00:18:37.510 "trtype": "TCP", 00:18:37.510 "adrfam": "IPv4", 00:18:37.510 "traddr": "10.0.0.2", 00:18:37.510 "trsvcid": "4420" 00:18:37.510 }, 00:18:37.510 "peer_address": { 00:18:37.510 "trtype": "TCP", 00:18:37.510 "adrfam": "IPv4", 00:18:37.510 "traddr": "10.0.0.1", 00:18:37.510 "trsvcid": "53010" 00:18:37.510 }, 00:18:37.510 "auth": { 00:18:37.510 "state": "completed", 00:18:37.510 "digest": 
"sha384", 00:18:37.510 "dhgroup": "ffdhe6144" 00:18:37.510 } 00:18:37.510 } 00:18:37.510 ]' 00:18:37.510 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.510 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.510 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.510 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:37.510 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.770 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.770 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.770 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.770 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:18:37.770 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.710 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.969 00:18:38.969 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.969 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.969 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.230 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.230 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.230 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.230 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.230 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.230 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.230 { 00:18:39.230 "cntlid": 83, 00:18:39.230 "qid": 0, 00:18:39.230 "state": "enabled", 00:18:39.230 "thread": "nvmf_tgt_poll_group_000", 00:18:39.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:39.230 "listen_address": { 00:18:39.230 "trtype": "TCP", 00:18:39.230 "adrfam": "IPv4", 00:18:39.230 "traddr": "10.0.0.2", 00:18:39.230 
"trsvcid": "4420" 00:18:39.230 }, 00:18:39.230 "peer_address": { 00:18:39.230 "trtype": "TCP", 00:18:39.230 "adrfam": "IPv4", 00:18:39.230 "traddr": "10.0.0.1", 00:18:39.230 "trsvcid": "53040" 00:18:39.230 }, 00:18:39.230 "auth": { 00:18:39.230 "state": "completed", 00:18:39.230 "digest": "sha384", 00:18:39.230 "dhgroup": "ffdhe6144" 00:18:39.230 } 00:18:39.230 } 00:18:39.230 ]' 00:18:39.230 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.230 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.230 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.230 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:39.230 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.230 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.230 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.230 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.490 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:18:39.490 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:18:40.061 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.061 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.061 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.061 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.061 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.061 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.061 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:40.061 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:40.321 
09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:40.321 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.321 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:40.321 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:40.321 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:40.321 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.321 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.321 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.321 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.321 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.321 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.321 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.321 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.582 00:18:40.843 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.843 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.843 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.843 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.843 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.843 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.843 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.843 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.843 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.843 { 00:18:40.843 "cntlid": 85, 00:18:40.843 "qid": 0, 00:18:40.843 "state": "enabled", 00:18:40.843 "thread": "nvmf_tgt_poll_group_000", 00:18:40.843 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:40.843 "listen_address": { 00:18:40.843 "trtype": "TCP", 00:18:40.843 "adrfam": "IPv4", 00:18:40.843 "traddr": "10.0.0.2", 00:18:40.843 "trsvcid": "4420" 00:18:40.843 }, 00:18:40.843 "peer_address": { 00:18:40.843 "trtype": "TCP", 00:18:40.843 "adrfam": "IPv4", 00:18:40.843 "traddr": "10.0.0.1", 00:18:40.843 "trsvcid": "53064" 00:18:40.843 }, 00:18:40.843 "auth": { 00:18:40.843 "state": "completed", 00:18:40.843 "digest": "sha384", 00:18:40.843 "dhgroup": "ffdhe6144" 00:18:40.843 } 00:18:40.843 } 00:18:40.843 ]' 00:18:40.843 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.104 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.104 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.104 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:41.104 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.104 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.104 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.104 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.365 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:18:41.365 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:18:41.937 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.937 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.937 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.937 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.937 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.937 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.938 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:41.938 09:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:42.221 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:42.221 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.221 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:42.221 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:42.221 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:42.221 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.221 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:42.221 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.221 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.221 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.221 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:42.221 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:42.221 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:42.482 00:18:42.482 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.482 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.482 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.742 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.742 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.742 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.742 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.742 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.742 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.742 { 00:18:42.742 "cntlid": 87, 
00:18:42.742 "qid": 0, 00:18:42.742 "state": "enabled", 00:18:42.743 "thread": "nvmf_tgt_poll_group_000", 00:18:42.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:42.743 "listen_address": { 00:18:42.743 "trtype": "TCP", 00:18:42.743 "adrfam": "IPv4", 00:18:42.743 "traddr": "10.0.0.2", 00:18:42.743 "trsvcid": "4420" 00:18:42.743 }, 00:18:42.743 "peer_address": { 00:18:42.743 "trtype": "TCP", 00:18:42.743 "adrfam": "IPv4", 00:18:42.743 "traddr": "10.0.0.1", 00:18:42.743 "trsvcid": "53082" 00:18:42.743 }, 00:18:42.743 "auth": { 00:18:42.743 "state": "completed", 00:18:42.743 "digest": "sha384", 00:18:42.743 "dhgroup": "ffdhe6144" 00:18:42.743 } 00:18:42.743 } 00:18:42.743 ]' 00:18:42.743 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.743 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.743 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.743 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:42.743 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.743 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.743 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.743 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.003 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:18:43.003 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:18:43.574 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.574 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.574 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.574 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.574 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.574 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.574 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.574 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:43.574 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:43.834 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:43.834 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.835 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:43.835 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:43.835 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:43.835 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.835 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.835 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.835 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.835 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.835 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.835 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.835 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.403 00:18:44.403 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.403 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.403 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.403 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.403 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.403 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.403 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.663 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.663 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.663 { 00:18:44.663 "cntlid": 89, 00:18:44.663 "qid": 0, 00:18:44.663 "state": "enabled", 00:18:44.663 "thread": "nvmf_tgt_poll_group_000", 00:18:44.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:44.663 "listen_address": { 00:18:44.663 "trtype": "TCP", 00:18:44.663 "adrfam": "IPv4", 00:18:44.663 "traddr": "10.0.0.2", 00:18:44.663 "trsvcid": "4420" 00:18:44.663 }, 00:18:44.663 "peer_address": { 00:18:44.663 "trtype": "TCP", 00:18:44.663 "adrfam": "IPv4", 00:18:44.663 "traddr": "10.0.0.1", 00:18:44.663 "trsvcid": "55314" 00:18:44.663 }, 00:18:44.663 "auth": { 00:18:44.663 "state": "completed", 00:18:44.663 "digest": "sha384", 00:18:44.663 "dhgroup": "ffdhe8192" 00:18:44.663 } 00:18:44.663 } 00:18:44.663 ]' 00:18:44.663 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.663 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.663 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.663 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:44.663 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.663 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.663 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.663 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.923 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:18:44.923 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:18:45.521 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.521 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.521 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.521 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.521 09:52:16 
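The pass/fail decision in each cycle comes from the block of jq probes repeated above: the target is asked for the subsystem's live queue pairs, and the negotiated auth parameters are read out of the first entry and compared against what was requested. Condensed into a sketch, with the field names exactly as in the JSON dumps in this trace:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # tear the session down before the next key/dhgroup combination
    hostrpc bdev_nvme_detach_controller nvme0
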
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.521 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.521 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:45.521 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:45.823 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:45.823 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.823 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:45.823 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:45.823 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:45.823 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.823 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.824 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.824 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.824 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.824 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.824 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.824 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.129 00:18:46.130 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.130 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.130 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.391 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.391 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:46.391 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.391 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.391 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.391 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.391 { 00:18:46.391 "cntlid": 91, 00:18:46.391 "qid": 0, 00:18:46.391 "state": "enabled", 00:18:46.391 "thread": "nvmf_tgt_poll_group_000", 00:18:46.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:46.391 "listen_address": { 00:18:46.391 "trtype": "TCP", 00:18:46.391 "adrfam": "IPv4", 00:18:46.391 "traddr": "10.0.0.2", 00:18:46.391 "trsvcid": "4420" 00:18:46.391 }, 00:18:46.391 "peer_address": { 00:18:46.391 "trtype": "TCP", 00:18:46.391 "adrfam": "IPv4", 00:18:46.391 "traddr": "10.0.0.1", 00:18:46.391 "trsvcid": "55340" 00:18:46.391 }, 00:18:46.391 "auth": { 00:18:46.391 "state": "completed", 00:18:46.391 "digest": "sha384", 00:18:46.391 "dhgroup": "ffdhe8192" 00:18:46.391 } 00:18:46.391 } 00:18:46.391 ]' 00:18:46.391 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.391 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.391 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.391 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:46.391 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.653 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.653 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.653 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.653 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:18:46.653 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:18:47.593 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.593 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.593 09:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.593 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.593 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.593 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.594 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:47.594 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:47.594 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:47.594 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.594 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:47.594 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:47.594 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:47.594 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.594 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.594 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.594 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.594 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.594 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.594 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.594 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.164 00:18:48.164 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.164 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.164 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.164 09:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.164 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.164 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.164 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.164 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.164 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.164 { 00:18:48.164 "cntlid": 93, 00:18:48.164 "qid": 0, 00:18:48.164 "state": "enabled", 00:18:48.164 "thread": "nvmf_tgt_poll_group_000", 00:18:48.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:48.164 "listen_address": { 00:18:48.164 "trtype": "TCP", 00:18:48.164 "adrfam": "IPv4", 00:18:48.164 "traddr": "10.0.0.2", 00:18:48.164 "trsvcid": "4420" 00:18:48.164 }, 00:18:48.164 "peer_address": { 00:18:48.164 "trtype": "TCP", 00:18:48.164 "adrfam": "IPv4", 00:18:48.164 "traddr": "10.0.0.1", 00:18:48.164 "trsvcid": "55376" 00:18:48.164 }, 00:18:48.164 "auth": { 00:18:48.164 "state": "completed", 00:18:48.164 "digest": "sha384", 00:18:48.164 "dhgroup": "ffdhe8192" 00:18:48.164 } 00:18:48.164 } 00:18:48.164 ]' 00:18:48.164 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.431 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.431 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.431 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:48.431 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.431 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.431 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.431 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.694 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:18:48.694 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:18:49.263 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.263 09:52:20 
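Each cycle also exercises the Linux kernel host through nvme-cli, with the secrets passed inline in the DHHC-1:<nn>:<base64>: format visible above. The shape of that leg, using the same flags as the log but with the long secrets elided to placeholders (whether --dhchap-ctrl-secret is accepted depends on the nvme-cli version installed on the test node):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret 'DHHC-1:02:<base64 host key>:' \
        --dhchap-ctrl-secret 'DHHC-1:01:<base64 controller key>:'

    # a clean teardown produces the "disconnected 1 controller(s)" lines above
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
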
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.263 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.263 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.263 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.264 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.264 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:49.264 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:49.523 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:49.523 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.523 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:49.523 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:49.523 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:49.523 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.523 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:49.523 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.523 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.523 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.523 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:49.523 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:49.523 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:50.093 00:18:50.093 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.093 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.093 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.093 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.093 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.093 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.093 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.093 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.093 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.093 { 00:18:50.093 "cntlid": 95, 00:18:50.093 "qid": 0, 00:18:50.093 "state": "enabled", 00:18:50.093 "thread": "nvmf_tgt_poll_group_000", 00:18:50.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:50.093 "listen_address": { 00:18:50.093 "trtype": "TCP", 00:18:50.093 "adrfam": "IPv4", 00:18:50.093 "traddr": "10.0.0.2", 00:18:50.093 "trsvcid": "4420" 00:18:50.093 }, 00:18:50.093 "peer_address": { 00:18:50.093 "trtype": "TCP", 00:18:50.093 "adrfam": "IPv4", 00:18:50.093 "traddr": "10.0.0.1", 00:18:50.093 "trsvcid": "55402" 00:18:50.093 }, 00:18:50.093 "auth": { 00:18:50.093 "state": "completed", 00:18:50.093 "digest": "sha384", 00:18:50.093 "dhgroup": "ffdhe8192" 00:18:50.093 } 00:18:50.093 } 00:18:50.093 ]' 00:18:50.093 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.353 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.354 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.354 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:50.354 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.354 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.354 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.354 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.614 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:18:50.614 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:18:51.184 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.184 09:52:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.184 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.184 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.184 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.184 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:51.184 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.184 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.184 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:51.184 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:51.445 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:51.445 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.445 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:51.445 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:51.445 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:51.445 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.445 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.445 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.445 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.445 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.445 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.445 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.445 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.445 00:18:51.445 
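The trace above has just re-provisioned the target side for the sha512/null combination: the previous host entry is removed, then the host NQN is added back to the subsystem together with the DH-HMAC-CHAP key pair it must present. A minimal sketch of that target-side RPC, assuming key0/ckey0 were registered with the target earlier in the test (not shown in this stretch of the log) and writing the host UUID as a placeholder:

# Allow the host on the subsystem and pin its DH-HMAC-CHAP keys;
# --dhchap-ctrlr-key supplies the controller key, enabling
# bidirectional authentication. <host-uuid> is a placeholder.
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
  "nqn.2014-08.org.nvmexpress:uuid:<host-uuid>" \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0
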
09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.445 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.445 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.706 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.706 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.706 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.706 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.706 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.706 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.706 { 00:18:51.706 "cntlid": 97, 00:18:51.706 "qid": 0, 00:18:51.706 "state": "enabled", 00:18:51.706 "thread": "nvmf_tgt_poll_group_000", 00:18:51.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:51.706 "listen_address": { 00:18:51.706 "trtype": "TCP", 00:18:51.706 "adrfam": "IPv4", 00:18:51.706 "traddr": "10.0.0.2", 00:18:51.706 "trsvcid": "4420" 00:18:51.706 }, 00:18:51.706 "peer_address": { 00:18:51.706 "trtype": "TCP", 00:18:51.706 "adrfam": "IPv4", 00:18:51.706 "traddr": "10.0.0.1", 00:18:51.706 "trsvcid": "55422" 00:18:51.706 }, 00:18:51.706 "auth": { 00:18:51.706 "state": "completed", 00:18:51.706 "digest": "sha512", 00:18:51.706 "dhgroup": "null" 00:18:51.706 } 00:18:51.706 } 00:18:51.706 ]' 00:18:51.706 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.706 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:51.706 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.966 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:51.966 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.966 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.966 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.966 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.966 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:18:51.966 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.907 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.167 00:18:53.167 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.167 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.167 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.427 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.427 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.427 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.427 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.427 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.427 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.427 { 00:18:53.427 "cntlid": 99, 00:18:53.427 "qid": 0, 00:18:53.427 "state": "enabled", 00:18:53.427 "thread": "nvmf_tgt_poll_group_000", 00:18:53.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:53.427 "listen_address": { 00:18:53.427 "trtype": "TCP", 00:18:53.427 "adrfam": "IPv4", 00:18:53.427 "traddr": "10.0.0.2", 00:18:53.427 "trsvcid": "4420" 00:18:53.427 }, 00:18:53.427 "peer_address": { 00:18:53.427 "trtype": "TCP", 00:18:53.427 "adrfam": "IPv4", 00:18:53.427 "traddr": "10.0.0.1", 00:18:53.427 "trsvcid": "55432" 00:18:53.427 }, 00:18:53.427 "auth": { 00:18:53.427 "state": "completed", 00:18:53.427 "digest": "sha512", 00:18:53.427 "dhgroup": "null" 00:18:53.427 } 00:18:53.427 } 00:18:53.427 ]' 00:18:53.427 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.427 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:53.427 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.427 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:53.427 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.427 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.427 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.427 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.688 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:18:53.688 09:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:18:54.258 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.258 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.258 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.258 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.258 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.258 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.258 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:54.258 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:54.518 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:54.518 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.518 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:54.518 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:54.518 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:54.518 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.518 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.518 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.518 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.518 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.518 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.518 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
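On the host side, bdev_connect above resolves to RPCs against the second SPDK instance listening on /var/tmp/host.sock: the host is first restricted to the one digest/DH-group pair under test, then a controller is attached with the matching keys. A sketch of what that host-side sequence amounts to for this sha512/null pass, with the host UUID as a placeholder:

# Restrict the host to one digest/DH-group pair, then attach with the
# keys for this loop iteration (key2/ckey2 here). <host-uuid> is a
# placeholder for the UUID-based host NQN seen throughout the log.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests sha512 --dhchap-dhgroups null
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
  -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "nqn.2014-08.org.nvmexpress:uuid:<host-uuid>" \
  -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
  --dhchap-key key2 --dhchap-ctrlr-key ckey2
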
00:18:54.518 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.778 00:18:54.778 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.778 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.778 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.039 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.039 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.039 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.039 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.039 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.039 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.039 { 00:18:55.039 "cntlid": 101, 00:18:55.039 "qid": 0, 00:18:55.039 "state": "enabled", 00:18:55.039 "thread": "nvmf_tgt_poll_group_000", 00:18:55.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:55.039 "listen_address": { 00:18:55.039 "trtype": "TCP", 00:18:55.039 "adrfam": "IPv4", 00:18:55.039 "traddr": "10.0.0.2", 00:18:55.039 "trsvcid": "4420" 00:18:55.039 }, 00:18:55.039 "peer_address": { 00:18:55.039 "trtype": "TCP", 00:18:55.039 "adrfam": "IPv4", 00:18:55.039 "traddr": "10.0.0.1", 00:18:55.039 "trsvcid": "53160" 00:18:55.039 }, 00:18:55.039 "auth": { 00:18:55.039 "state": "completed", 00:18:55.039 "digest": "sha512", 00:18:55.039 "dhgroup": "null" 00:18:55.039 } 00:18:55.039 } 00:18:55.039 ]' 00:18:55.039 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.039 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.039 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.040 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:55.040 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.040 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.040 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.040 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.300 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:18:55.300 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:18:55.872 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.872 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:55.872 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.872 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.872 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.872 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.872 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:55.872 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:56.133 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:56.133 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.133 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:56.133 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:56.133 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:56.133 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.134 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:56.134 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.134 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.134 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.134 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:56.134 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:56.134 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:56.394 00:18:56.394 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.394 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.394 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.655 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.655 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.655 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.655 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.655 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.655 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.655 { 00:18:56.655 "cntlid": 103, 00:18:56.655 "qid": 0, 00:18:56.655 "state": "enabled", 00:18:56.655 "thread": "nvmf_tgt_poll_group_000", 00:18:56.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:56.655 "listen_address": { 00:18:56.655 "trtype": "TCP", 00:18:56.655 "adrfam": "IPv4", 00:18:56.655 "traddr": "10.0.0.2", 00:18:56.655 "trsvcid": "4420" 00:18:56.655 }, 00:18:56.655 "peer_address": { 00:18:56.655 "trtype": "TCP", 00:18:56.655 "adrfam": "IPv4", 00:18:56.655 "traddr": "10.0.0.1", 00:18:56.655 "trsvcid": "53176" 00:18:56.655 }, 00:18:56.655 "auth": { 00:18:56.655 "state": "completed", 00:18:56.655 "digest": "sha512", 00:18:56.655 "dhgroup": "null" 00:18:56.655 } 00:18:56.655 } 00:18:56.655 ]' 00:18:56.655 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.655 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.655 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.655 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:56.655 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.655 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.655 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.655 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.916 09:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:18:56.916 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:18:57.498 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.499 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.499 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.499 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.499 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.499 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.499 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.499 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:57.499 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:57.762 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:57.762 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.763 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:57.763 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:57.763 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:57.763 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.763 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.763 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.763 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.763 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.763 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
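Once the controller attaches, the test confirms on the target that authentication actually completed with the expected parameters: it pulls the subsystem's queue pairs and checks the auth block with jq, as the get_qpairs JSON dumps in this log show. A sketch of that verification, mirroring the sha512/ffdhe2048 pass that follows:

# Query the target's qpairs and verify the negotiated auth
# parameters for the one expected connection.
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
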
00:18:57.763 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.763 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.023 00:18:58.023 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.023 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.023 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.023 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.023 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.023 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.023 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.023 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.023 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.023 { 00:18:58.023 "cntlid": 105, 00:18:58.023 "qid": 0, 00:18:58.023 "state": "enabled", 00:18:58.023 "thread": "nvmf_tgt_poll_group_000", 00:18:58.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:58.023 "listen_address": { 00:18:58.023 "trtype": "TCP", 00:18:58.023 "adrfam": "IPv4", 00:18:58.023 "traddr": "10.0.0.2", 00:18:58.023 "trsvcid": "4420" 00:18:58.023 }, 00:18:58.023 "peer_address": { 00:18:58.023 "trtype": "TCP", 00:18:58.023 "adrfam": "IPv4", 00:18:58.023 "traddr": "10.0.0.1", 00:18:58.023 "trsvcid": "53190" 00:18:58.023 }, 00:18:58.023 "auth": { 00:18:58.023 "state": "completed", 00:18:58.023 "digest": "sha512", 00:18:58.023 "dhgroup": "ffdhe2048" 00:18:58.023 } 00:18:58.023 } 00:18:58.023 ]' 00:18:58.023 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.284 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.284 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.284 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:58.284 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.284 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.284 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.284 09:52:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.546 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:18:58.546 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:18:59.118 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.118 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.118 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.118 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.118 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.118 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.118 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:59.118 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:59.380 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:59.380 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.380 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:59.380 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:59.380 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:59.380 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.380 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.380 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.380 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:59.380 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.380 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.380 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.380 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.380 00:18:59.641 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.641 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.641 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.641 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.641 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.641 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.641 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.641 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.641 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.641 { 00:18:59.641 "cntlid": 107, 00:18:59.641 "qid": 0, 00:18:59.641 "state": "enabled", 00:18:59.641 "thread": "nvmf_tgt_poll_group_000", 00:18:59.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:59.641 "listen_address": { 00:18:59.641 "trtype": "TCP", 00:18:59.641 "adrfam": "IPv4", 00:18:59.641 "traddr": "10.0.0.2", 00:18:59.641 "trsvcid": "4420" 00:18:59.641 }, 00:18:59.641 "peer_address": { 00:18:59.641 "trtype": "TCP", 00:18:59.641 "adrfam": "IPv4", 00:18:59.641 "traddr": "10.0.0.1", 00:18:59.641 "trsvcid": "53206" 00:18:59.641 }, 00:18:59.641 "auth": { 00:18:59.641 "state": "completed", 00:18:59.641 "digest": "sha512", 00:18:59.641 "dhgroup": "ffdhe2048" 00:18:59.641 } 00:18:59.641 } 00:18:59.641 ]' 00:18:59.641 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.641 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.641 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.903 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:59.903 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:59.903 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.903 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.903 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.903 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:18:59.903 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
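The nvme connect/disconnect bracketing each pass exercises the same credentials through the kernel initiator, expressed as DHHC-1 secrets; the two-digit field after DHHC-1 encodes how the base64 secret was transformed (00 unhashed, 01/02/03 HMAC with SHA-256/384/512). A sketch of that leg with placeholder secrets and host UUID:

# Connect through the kernel host stack using DH-HMAC-CHAP secrets;
# --dhchap-ctrl-secret makes the authentication bidirectional.
# <host-uuid>, <host-secret> and <ctrl-secret> are placeholders for
# the values generated by the test.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q "nqn.2014-08.org.nvmexpress:uuid:<host-uuid>" \
  --hostid <host-uuid> -l 0 \
  --dhchap-secret "DHHC-1:01:<host-secret>" \
  --dhchap-ctrl-secret "DHHC-1:02:<ctrl-secret>"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
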
00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.847 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.107 00:19:01.107 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.107 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.107 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.368 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.368 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.368 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.368 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.368 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.368 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.368 { 00:19:01.368 "cntlid": 109, 00:19:01.368 "qid": 0, 00:19:01.368 "state": "enabled", 00:19:01.368 "thread": "nvmf_tgt_poll_group_000", 00:19:01.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:01.368 "listen_address": { 00:19:01.368 "trtype": "TCP", 00:19:01.368 "adrfam": "IPv4", 00:19:01.368 "traddr": "10.0.0.2", 00:19:01.368 "trsvcid": "4420" 00:19:01.368 }, 00:19:01.368 "peer_address": { 00:19:01.368 "trtype": "TCP", 00:19:01.368 "adrfam": "IPv4", 00:19:01.368 "traddr": "10.0.0.1", 00:19:01.368 "trsvcid": "53236" 00:19:01.368 }, 00:19:01.368 "auth": { 00:19:01.368 "state": "completed", 00:19:01.368 "digest": "sha512", 00:19:01.368 "dhgroup": "ffdhe2048" 00:19:01.368 } 00:19:01.368 } 00:19:01.368 ]' 00:19:01.368 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.368 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.368 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.368 09:52:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:01.368 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.368 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.368 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.368 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.628 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:19:01.628 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:19:02.199 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.199 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.199 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.199 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.199 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.199 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.199 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:02.200 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:02.460 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:02.460 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.460 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:02.460 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:02.460 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:02.460 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.460 09:52:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:02.460 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.460 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.460 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.460 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:02.460 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.460 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.721 00:19:02.721 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.721 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.721 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.982 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.982 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.982 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.982 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.982 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.982 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.982 { 00:19:02.982 "cntlid": 111, 00:19:02.982 "qid": 0, 00:19:02.982 "state": "enabled", 00:19:02.982 "thread": "nvmf_tgt_poll_group_000", 00:19:02.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:02.982 "listen_address": { 00:19:02.982 "trtype": "TCP", 00:19:02.982 "adrfam": "IPv4", 00:19:02.982 "traddr": "10.0.0.2", 00:19:02.982 "trsvcid": "4420" 00:19:02.982 }, 00:19:02.982 "peer_address": { 00:19:02.982 "trtype": "TCP", 00:19:02.982 "adrfam": "IPv4", 00:19:02.982 "traddr": "10.0.0.1", 00:19:02.982 "trsvcid": "53262" 00:19:02.982 }, 00:19:02.982 "auth": { 00:19:02.982 "state": "completed", 00:19:02.982 "digest": "sha512", 00:19:02.982 "dhgroup": "ffdhe2048" 00:19:02.982 } 00:19:02.982 } 00:19:02.982 ]' 00:19:02.982 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.982 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.982 
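The auth.sh@118-@121 markers recurring through this stretch show the shape of the sweep producing it: every configured digest is crossed with every DH group and every key index, and each combination runs through connect_authenticate. Reconstructed from the loop lines visible in the trace, with the array contents left abstract (the passes visible here cover sha384/sha512 with the null and ffdhe* groups):

# Outline of the sweep in target/auth.sh; digests, dhgroups and keys
# are populated earlier in the script. hostrpc wraps rpc.py with
# -s /var/tmp/host.sock, as auth.sh@31 shows.
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      hostrpc bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done
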
09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.982 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:02.982 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.982 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.982 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.982 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.248 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:19:03.248 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:19:03.819 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.819 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.819 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.819 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.819 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.819 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.819 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.819 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:03.819 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:04.080 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:04.080 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.080 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:04.080 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:04.080 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:04.080 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.080 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.080 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.080 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.080 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.081 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.081 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.081 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.341 00:19:04.341 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.341 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.341 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.601 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.601 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.601 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.601 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.601 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.601 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.601 { 00:19:04.601 "cntlid": 113, 00:19:04.601 "qid": 0, 00:19:04.601 "state": "enabled", 00:19:04.601 "thread": "nvmf_tgt_poll_group_000", 00:19:04.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:04.602 "listen_address": { 00:19:04.602 "trtype": "TCP", 00:19:04.602 "adrfam": "IPv4", 00:19:04.602 "traddr": "10.0.0.2", 00:19:04.602 "trsvcid": "4420" 00:19:04.602 }, 00:19:04.602 "peer_address": { 00:19:04.602 "trtype": "TCP", 00:19:04.602 "adrfam": "IPv4", 00:19:04.602 "traddr": "10.0.0.1", 00:19:04.602 "trsvcid": "45902" 00:19:04.602 }, 00:19:04.602 "auth": { 00:19:04.602 "state": "completed", 00:19:04.602 "digest": "sha512", 00:19:04.602 "dhgroup": "ffdhe3072" 00:19:04.602 } 00:19:04.602 } 00:19:04.602 ]' 00:19:04.602 09:52:35 
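
[Editor's note] This round switches to bidirectional authentication: nvmf_subsystem_add_host pins both a host key (key0) and a controller key (ckey0) on the target, and the host-side bdev_nvme_attach_controller must present the matching pair. The conditional ${ckeys[$3]:+...} expansion above is what lets key3 rounds, which have no controller key in this trace, fall back to unidirectional auth. Replayed as a standalone sketch; key0/ckey0 are key names registered earlier in auth.sh, outside this excerpt:

    # Target side: allow the host and pin both directions' keys.
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Host side: attach with the matching key pair; auth runs during CONNECT.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
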
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.602 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.602 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.602 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:04.602 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.602 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.602 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.602 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.862 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:19:04.862 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:19:05.433 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.433 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:05.433 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.433 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.433 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.433 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.433 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:05.433 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:05.694 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:05.694 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.694 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:19:05.694 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:05.694 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:05.694 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.694 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.694 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.694 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.694 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.694 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.694 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.694 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.954 00:19:05.954 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.954 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.954 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.215 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.215 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.215 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.215 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.215 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.215 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.215 { 00:19:06.215 "cntlid": 115, 00:19:06.215 "qid": 0, 00:19:06.215 "state": "enabled", 00:19:06.215 "thread": "nvmf_tgt_poll_group_000", 00:19:06.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:06.215 "listen_address": { 00:19:06.215 "trtype": "TCP", 00:19:06.215 "adrfam": "IPv4", 00:19:06.215 "traddr": "10.0.0.2", 00:19:06.215 "trsvcid": "4420" 00:19:06.215 }, 00:19:06.215 "peer_address": { 00:19:06.215 "trtype": "TCP", 00:19:06.215 "adrfam": "IPv4", 
00:19:06.215 "traddr": "10.0.0.1", 00:19:06.215 "trsvcid": "45924" 00:19:06.215 }, 00:19:06.215 "auth": { 00:19:06.215 "state": "completed", 00:19:06.215 "digest": "sha512", 00:19:06.215 "dhgroup": "ffdhe3072" 00:19:06.215 } 00:19:06.215 } 00:19:06.215 ]' 00:19:06.215 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.215 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.215 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.215 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:06.215 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.215 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.215 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.215 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.476 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:19:06.476 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:19:07.047 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.047 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.047 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.047 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.047 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.047 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.047 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:07.047 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:07.308 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:19:07.308 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.308 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:07.308 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:07.308 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:07.308 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.308 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.308 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.308 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.308 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.308 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.308 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.308 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.571 00:19:07.571 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.571 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.571 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.832 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.832 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.832 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.832 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.832 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.832 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.832 { 00:19:07.832 "cntlid": 117, 00:19:07.832 "qid": 0, 00:19:07.832 "state": "enabled", 00:19:07.832 "thread": "nvmf_tgt_poll_group_000", 00:19:07.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:07.832 "listen_address": { 00:19:07.832 "trtype": "TCP", 
00:19:07.832 "adrfam": "IPv4", 00:19:07.832 "traddr": "10.0.0.2", 00:19:07.832 "trsvcid": "4420" 00:19:07.832 }, 00:19:07.832 "peer_address": { 00:19:07.832 "trtype": "TCP", 00:19:07.832 "adrfam": "IPv4", 00:19:07.832 "traddr": "10.0.0.1", 00:19:07.832 "trsvcid": "45950" 00:19:07.832 }, 00:19:07.832 "auth": { 00:19:07.832 "state": "completed", 00:19:07.832 "digest": "sha512", 00:19:07.832 "dhgroup": "ffdhe3072" 00:19:07.832 } 00:19:07.832 } 00:19:07.832 ]' 00:19:07.832 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.832 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.832 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.832 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:07.832 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.832 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.832 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.832 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.092 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:19:08.093 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:19:08.663 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.663 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.663 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.663 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.663 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.663 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.663 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:08.663 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:08.925 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:19:08.925 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.925 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:08.925 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:08.925 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:08.925 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.925 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:08.925 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.925 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.925 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.925 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:08.925 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:08.925 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:09.186 00:19:09.186 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.186 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.186 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.448 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.448 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.448 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.448 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.448 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.448 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.448 { 00:19:09.448 "cntlid": 119, 00:19:09.448 "qid": 0, 00:19:09.448 "state": "enabled", 00:19:09.448 "thread": "nvmf_tgt_poll_group_000", 00:19:09.448 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:09.448 "listen_address": { 00:19:09.448 "trtype": "TCP", 00:19:09.448 "adrfam": "IPv4", 00:19:09.448 "traddr": "10.0.0.2", 00:19:09.448 "trsvcid": "4420" 00:19:09.448 }, 00:19:09.448 "peer_address": { 00:19:09.448 "trtype": "TCP", 00:19:09.448 "adrfam": "IPv4", 00:19:09.448 "traddr": "10.0.0.1", 00:19:09.448 "trsvcid": "45960" 00:19:09.448 }, 00:19:09.448 "auth": { 00:19:09.448 "state": "completed", 00:19:09.448 "digest": "sha512", 00:19:09.448 "dhgroup": "ffdhe3072" 00:19:09.448 } 00:19:09.448 } 00:19:09.448 ]' 00:19:09.448 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.448 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.448 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.448 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:09.448 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.448 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.448 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.448 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.708 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:19:09.708 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:19:10.278 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.278 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.278 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.278 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.278 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.278 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.278 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.278 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:10.278 09:52:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:10.538 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:19:10.538 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.538 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:10.538 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:10.538 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:10.538 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.538 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.538 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.538 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.538 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.538 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.538 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.538 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.799 00:19:10.799 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.799 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.799 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.060 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.060 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.060 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.060 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.060 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.060 09:52:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.060 { 00:19:11.060 "cntlid": 121, 00:19:11.060 "qid": 0, 00:19:11.060 "state": "enabled", 00:19:11.060 "thread": "nvmf_tgt_poll_group_000", 00:19:11.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:11.060 "listen_address": { 00:19:11.060 "trtype": "TCP", 00:19:11.060 "adrfam": "IPv4", 00:19:11.060 "traddr": "10.0.0.2", 00:19:11.060 "trsvcid": "4420" 00:19:11.060 }, 00:19:11.060 "peer_address": { 00:19:11.060 "trtype": "TCP", 00:19:11.060 "adrfam": "IPv4", 00:19:11.060 "traddr": "10.0.0.1", 00:19:11.060 "trsvcid": "45978" 00:19:11.060 }, 00:19:11.060 "auth": { 00:19:11.060 "state": "completed", 00:19:11.060 "digest": "sha512", 00:19:11.060 "dhgroup": "ffdhe4096" 00:19:11.060 } 00:19:11.060 } 00:19:11.060 ]' 00:19:11.060 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.060 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.060 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.060 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:11.060 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.060 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.060 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.060 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.323 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:19:11.323 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:19:11.895 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.156 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.156 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.156 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.156 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
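
[Editor's note] By this point the shape of the whole section is visible: an outer sweep over DH groups (ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144 in this excerpt) and an inner sweep over key indices 0-3, with bdev_nvme_set_options re-applied on the host before every round. The driver loop in auth.sh has roughly this shape (sketch, not a verbatim copy of the script):

    # Sketch of the sweep; connect_authenticate is auth.sh's per-round helper.
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
        for keyid in 0 1 2 3; do
            scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
                --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done
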
00:19:12.156 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.156 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:12.156 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:12.156 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:19:12.156 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.156 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:12.156 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:12.156 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:12.156 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.156 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.156 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.156 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.156 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.156 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.156 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.156 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.417 00:19:12.417 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.417 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.417 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.678 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.678 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.678 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.678 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.678 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.678 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.678 { 00:19:12.678 "cntlid": 123, 00:19:12.678 "qid": 0, 00:19:12.678 "state": "enabled", 00:19:12.678 "thread": "nvmf_tgt_poll_group_000", 00:19:12.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:12.678 "listen_address": { 00:19:12.678 "trtype": "TCP", 00:19:12.678 "adrfam": "IPv4", 00:19:12.678 "traddr": "10.0.0.2", 00:19:12.678 "trsvcid": "4420" 00:19:12.678 }, 00:19:12.678 "peer_address": { 00:19:12.678 "trtype": "TCP", 00:19:12.678 "adrfam": "IPv4", 00:19:12.678 "traddr": "10.0.0.1", 00:19:12.678 "trsvcid": "46002" 00:19:12.678 }, 00:19:12.678 "auth": { 00:19:12.678 "state": "completed", 00:19:12.678 "digest": "sha512", 00:19:12.678 "dhgroup": "ffdhe4096" 00:19:12.678 } 00:19:12.678 } 00:19:12.678 ]' 00:19:12.678 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.678 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.678 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.678 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:12.678 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.940 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.940 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.940 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.940 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:19:12.940 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:19:13.512 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.773 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.773 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.773 09:52:44 
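
[Editor's note] Each round ends with a symmetric teardown so the next digest/dhgroup/key combination starts from a clean slate: the RPC-attached controller is detached, the kernel session is disconnected, and the host entry is removed from the subsystem. Condensed from the trace:

    # Per-round cleanup mirrored from the commands above.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
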
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.773 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.773 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.773 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:13.774 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:13.774 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:13.774 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.774 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:13.774 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:13.774 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:13.774 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.774 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.774 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.774 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.774 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.774 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.774 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.774 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.035 00:19:14.035 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.035 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.035 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.296 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.296 09:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.296 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.296 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.296 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.296 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.296 { 00:19:14.296 "cntlid": 125, 00:19:14.296 "qid": 0, 00:19:14.296 "state": "enabled", 00:19:14.296 "thread": "nvmf_tgt_poll_group_000", 00:19:14.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:14.296 "listen_address": { 00:19:14.296 "trtype": "TCP", 00:19:14.296 "adrfam": "IPv4", 00:19:14.296 "traddr": "10.0.0.2", 00:19:14.296 "trsvcid": "4420" 00:19:14.296 }, 00:19:14.296 "peer_address": { 00:19:14.296 "trtype": "TCP", 00:19:14.296 "adrfam": "IPv4", 00:19:14.296 "traddr": "10.0.0.1", 00:19:14.296 "trsvcid": "33632" 00:19:14.296 }, 00:19:14.296 "auth": { 00:19:14.296 "state": "completed", 00:19:14.296 "digest": "sha512", 00:19:14.296 "dhgroup": "ffdhe4096" 00:19:14.296 } 00:19:14.296 } 00:19:14.296 ]' 00:19:14.296 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.296 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.296 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.296 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:14.296 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.297 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.297 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.297 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.557 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:19:14.557 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:19:15.129 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.129 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:15.129 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.129 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.129 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.129 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.129 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:15.130 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:15.391 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:15.391 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.391 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:15.391 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:15.391 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:15.391 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.391 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:15.391 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.391 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.391 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.391 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:15.391 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:15.391 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:15.651 00:19:15.651 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.651 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.651 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.912 09:52:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.912 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.912 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.912 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.912 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.912 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.912 { 00:19:15.912 "cntlid": 127, 00:19:15.912 "qid": 0, 00:19:15.912 "state": "enabled", 00:19:15.912 "thread": "nvmf_tgt_poll_group_000", 00:19:15.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:15.912 "listen_address": { 00:19:15.912 "trtype": "TCP", 00:19:15.912 "adrfam": "IPv4", 00:19:15.912 "traddr": "10.0.0.2", 00:19:15.912 "trsvcid": "4420" 00:19:15.912 }, 00:19:15.912 "peer_address": { 00:19:15.912 "trtype": "TCP", 00:19:15.912 "adrfam": "IPv4", 00:19:15.912 "traddr": "10.0.0.1", 00:19:15.912 "trsvcid": "33660" 00:19:15.912 }, 00:19:15.912 "auth": { 00:19:15.912 "state": "completed", 00:19:15.912 "digest": "sha512", 00:19:15.912 "dhgroup": "ffdhe4096" 00:19:15.912 } 00:19:15.912 } 00:19:15.912 ]' 00:19:15.912 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.912 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.912 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.912 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:15.912 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.912 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.912 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.912 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.174 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:19:16.174 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:19:16.746 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.746 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.746 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.746 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.746 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.746 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.746 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.746 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:16.746 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:17.007 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:17.007 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.007 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:17.007 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:17.007 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:17.007 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.007 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.007 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.007 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.007 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.007 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.007 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.007 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.268 00:19:17.529 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.529 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.529 
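
[Editor's note] The ffdhe6144 rounds that follow repeat the same choreography end to end; note the cntlid in the qpair dumps climbing 111, 113, ..., 129 as successive rounds allocate fresh controllers. For reference, one complete round collapses to the sequence below (sketch assembled from the traced commands, run from an SPDK checkout; key0/ckey0 were registered earlier in auth.sh):

    #!/usr/bin/env bash
    set -euo pipefail
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }

    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Fail the round unless the handshake actually completed.
    [[ $(scripts/rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" \
         | jq -r '.[0].auth.state') == completed ]]
    hostrpc bdev_nvme_detach_controller nvme0
    scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
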
09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.529 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.529 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.529 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.529 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.529 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.529 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.529 { 00:19:17.529 "cntlid": 129, 00:19:17.529 "qid": 0, 00:19:17.529 "state": "enabled", 00:19:17.529 "thread": "nvmf_tgt_poll_group_000", 00:19:17.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:17.529 "listen_address": { 00:19:17.529 "trtype": "TCP", 00:19:17.529 "adrfam": "IPv4", 00:19:17.529 "traddr": "10.0.0.2", 00:19:17.529 "trsvcid": "4420" 00:19:17.529 }, 00:19:17.529 "peer_address": { 00:19:17.529 "trtype": "TCP", 00:19:17.529 "adrfam": "IPv4", 00:19:17.529 "traddr": "10.0.0.1", 00:19:17.529 "trsvcid": "33680" 00:19:17.529 }, 00:19:17.529 "auth": { 00:19:17.529 "state": "completed", 00:19:17.529 "digest": "sha512", 00:19:17.529 "dhgroup": "ffdhe6144" 00:19:17.529 } 00:19:17.529 } 00:19:17.529 ]' 00:19:17.529 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.529 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.529 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.790 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:17.790 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.790 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.790 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.790 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.051 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:19:18.051 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret 
DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:19:18.623 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.623 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:18.623 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.623 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.623 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.623 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.623 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:18.623 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:18.884 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:18.884 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.884 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:18.884 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:18.884 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:18.884 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.884 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.884 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.884 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.884 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.884 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.884 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.884 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.144 00:19:19.144 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.144 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.144 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.405 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.405 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.406 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.406 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.406 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.406 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.406 { 00:19:19.406 "cntlid": 131, 00:19:19.406 "qid": 0, 00:19:19.406 "state": "enabled", 00:19:19.406 "thread": "nvmf_tgt_poll_group_000", 00:19:19.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:19.406 "listen_address": { 00:19:19.406 "trtype": "TCP", 00:19:19.406 "adrfam": "IPv4", 00:19:19.406 "traddr": "10.0.0.2", 00:19:19.406 "trsvcid": "4420" 00:19:19.406 }, 00:19:19.406 "peer_address": { 00:19:19.406 "trtype": "TCP", 00:19:19.406 "adrfam": "IPv4", 00:19:19.406 "traddr": "10.0.0.1", 00:19:19.406 "trsvcid": "33714" 00:19:19.406 }, 00:19:19.406 "auth": { 00:19:19.406 "state": "completed", 00:19:19.406 "digest": "sha512", 00:19:19.406 "dhgroup": "ffdhe6144" 00:19:19.406 } 00:19:19.406 } 00:19:19.406 ]' 00:19:19.406 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.406 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.406 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.406 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:19.406 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.406 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.406 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.406 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.668 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:19:19.668 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:19:20.239 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.239 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.239 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.239 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.239 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.239 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.239 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:20.239 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:20.500 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:20.500 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.500 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:20.500 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:20.500 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:20.500 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.500 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.500 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.500 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.500 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.500 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.500 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.500 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.761 00:19:20.761 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.761 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.761 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.022 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.022 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.022 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.022 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.022 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.022 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.022 { 00:19:21.022 "cntlid": 133, 00:19:21.022 "qid": 0, 00:19:21.022 "state": "enabled", 00:19:21.022 "thread": "nvmf_tgt_poll_group_000", 00:19:21.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:21.022 "listen_address": { 00:19:21.022 "trtype": "TCP", 00:19:21.022 "adrfam": "IPv4", 00:19:21.022 "traddr": "10.0.0.2", 00:19:21.022 "trsvcid": "4420" 00:19:21.022 }, 00:19:21.022 "peer_address": { 00:19:21.022 "trtype": "TCP", 00:19:21.022 "adrfam": "IPv4", 00:19:21.022 "traddr": "10.0.0.1", 00:19:21.022 "trsvcid": "33738" 00:19:21.022 }, 00:19:21.022 "auth": { 00:19:21.022 "state": "completed", 00:19:21.022 "digest": "sha512", 00:19:21.022 "dhgroup": "ffdhe6144" 00:19:21.022 } 00:19:21.022 } 00:19:21.022 ]' 00:19:21.022 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.022 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.022 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.022 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:21.022 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.283 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.283 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.283 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.283 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret 
DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:19:21.283 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:19:22.225 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.225 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:22.225 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.225 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.225 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.225 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.225 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:22.225 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:22.225 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:22.225 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.225 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:22.225 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:22.225 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:22.225 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.225 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:22.225 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.225 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.225 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.225 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:22.225 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:19:22.225 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.486 00:19:22.486 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.486 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.486 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.747 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.747 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.747 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.747 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.747 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.747 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.747 { 00:19:22.747 "cntlid": 135, 00:19:22.747 "qid": 0, 00:19:22.747 "state": "enabled", 00:19:22.747 "thread": "nvmf_tgt_poll_group_000", 00:19:22.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:22.747 "listen_address": { 00:19:22.747 "trtype": "TCP", 00:19:22.747 "adrfam": "IPv4", 00:19:22.747 "traddr": "10.0.0.2", 00:19:22.747 "trsvcid": "4420" 00:19:22.747 }, 00:19:22.747 "peer_address": { 00:19:22.747 "trtype": "TCP", 00:19:22.747 "adrfam": "IPv4", 00:19:22.747 "traddr": "10.0.0.1", 00:19:22.747 "trsvcid": "33752" 00:19:22.747 }, 00:19:22.747 "auth": { 00:19:22.747 "state": "completed", 00:19:22.747 "digest": "sha512", 00:19:22.747 "dhgroup": "ffdhe6144" 00:19:22.747 } 00:19:22.747 } 00:19:22.747 ]' 00:19:22.747 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.747 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.747 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.747 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:22.747 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.008 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.008 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.008 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.008 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:19:23.008 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:19:23.646 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.646 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.646 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.646 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.646 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.646 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.646 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.646 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:23.646 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:23.941 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:23.941 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.941 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:23.941 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:23.941 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:23.941 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.941 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.941 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.941 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.941 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.941 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.941 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.941 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.513 00:19:24.513 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.513 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.513 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.513 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.513 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.513 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.513 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.513 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.513 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.513 { 00:19:24.513 "cntlid": 137, 00:19:24.513 "qid": 0, 00:19:24.513 "state": "enabled", 00:19:24.513 "thread": "nvmf_tgt_poll_group_000", 00:19:24.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:24.513 "listen_address": { 00:19:24.513 "trtype": "TCP", 00:19:24.513 "adrfam": "IPv4", 00:19:24.513 "traddr": "10.0.0.2", 00:19:24.513 "trsvcid": "4420" 00:19:24.513 }, 00:19:24.513 "peer_address": { 00:19:24.513 "trtype": "TCP", 00:19:24.513 "adrfam": "IPv4", 00:19:24.513 "traddr": "10.0.0.1", 00:19:24.513 "trsvcid": "36082" 00:19:24.513 }, 00:19:24.513 "auth": { 00:19:24.513 "state": "completed", 00:19:24.513 "digest": "sha512", 00:19:24.513 "dhgroup": "ffdhe8192" 00:19:24.513 } 00:19:24.513 } 00:19:24.513 ]' 00:19:24.513 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.513 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.513 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.777 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:24.777 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.777 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.777 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.777 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.777 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:19:24.777 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.718 09:52:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.718 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.289 00:19:26.289 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.289 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.289 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.289 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.289 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.289 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.289 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.550 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.550 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.550 { 00:19:26.550 "cntlid": 139, 00:19:26.550 "qid": 0, 00:19:26.550 "state": "enabled", 00:19:26.550 "thread": "nvmf_tgt_poll_group_000", 00:19:26.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:26.550 "listen_address": { 00:19:26.550 "trtype": "TCP", 00:19:26.550 "adrfam": "IPv4", 00:19:26.550 "traddr": "10.0.0.2", 00:19:26.550 "trsvcid": "4420" 00:19:26.550 }, 00:19:26.550 "peer_address": { 00:19:26.550 "trtype": "TCP", 00:19:26.550 "adrfam": "IPv4", 00:19:26.550 "traddr": "10.0.0.1", 00:19:26.550 "trsvcid": "36100" 00:19:26.550 }, 00:19:26.550 "auth": { 00:19:26.550 "state": "completed", 00:19:26.550 "digest": "sha512", 00:19:26.550 "dhgroup": "ffdhe8192" 00:19:26.550 } 00:19:26.550 } 00:19:26.550 ]' 00:19:26.550 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.550 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.550 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.550 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:26.550 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.550 09:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.550 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.550 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.812 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:19:26.812 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: --dhchap-ctrl-secret DHHC-1:02:NzRkMzY2NzliY2E3MGY5MTg2YmI0NzlkYjA5MzI2YTEzOWZhNDg2MjU3OTZjYmRlqL/vPg==: 00:19:27.383 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.383 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.383 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.383 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.383 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.383 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.383 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:27.383 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:27.644 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:27.644 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.644 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:27.644 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:27.644 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:27.644 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.644 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.644 09:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.644 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.644 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.644 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.644 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.644 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.905 00:19:28.166 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.166 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.166 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.166 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.166 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.166 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.166 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.166 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.166 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.166 { 00:19:28.166 "cntlid": 141, 00:19:28.166 "qid": 0, 00:19:28.166 "state": "enabled", 00:19:28.166 "thread": "nvmf_tgt_poll_group_000", 00:19:28.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:28.166 "listen_address": { 00:19:28.166 "trtype": "TCP", 00:19:28.166 "adrfam": "IPv4", 00:19:28.166 "traddr": "10.0.0.2", 00:19:28.166 "trsvcid": "4420" 00:19:28.166 }, 00:19:28.166 "peer_address": { 00:19:28.166 "trtype": "TCP", 00:19:28.166 "adrfam": "IPv4", 00:19:28.166 "traddr": "10.0.0.1", 00:19:28.167 "trsvcid": "36128" 00:19:28.167 }, 00:19:28.167 "auth": { 00:19:28.167 "state": "completed", 00:19:28.167 "digest": "sha512", 00:19:28.167 "dhgroup": "ffdhe8192" 00:19:28.167 } 00:19:28.167 } 00:19:28.167 ]' 00:19:28.167 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.167 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.167 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.427 09:52:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:28.427 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.427 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.427 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.427 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.427 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:19:28.427 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:01:MWQxYWNlODZmOWNmZTVlODVmMmQzMTkxZmMxZmJhYThS3flV: 00:19:29.368 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.368 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.368 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.368 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.368 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.368 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.368 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:29.368 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:29.368 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:29.368 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.368 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:29.368 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:29.368 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:29.368 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.368 09:53:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:29.368 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.368 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.368 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.368 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:29.368 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.368 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.941 00:19:29.941 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.941 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.941 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.204 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.204 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.204 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.204 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.204 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.204 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.204 { 00:19:30.204 "cntlid": 143, 00:19:30.204 "qid": 0, 00:19:30.204 "state": "enabled", 00:19:30.204 "thread": "nvmf_tgt_poll_group_000", 00:19:30.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:30.204 "listen_address": { 00:19:30.204 "trtype": "TCP", 00:19:30.204 "adrfam": "IPv4", 00:19:30.204 "traddr": "10.0.0.2", 00:19:30.204 "trsvcid": "4420" 00:19:30.204 }, 00:19:30.204 "peer_address": { 00:19:30.204 "trtype": "TCP", 00:19:30.204 "adrfam": "IPv4", 00:19:30.204 "traddr": "10.0.0.1", 00:19:30.204 "trsvcid": "36156" 00:19:30.204 }, 00:19:30.204 "auth": { 00:19:30.204 "state": "completed", 00:19:30.204 "digest": "sha512", 00:19:30.204 "dhgroup": "ffdhe8192" 00:19:30.204 } 00:19:30.204 } 00:19:30.204 ]' 00:19:30.204 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.204 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.204 
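# [annotation] Condensed sketch of the qpair auth verification this pass performs for
# sha512 + ffdhe8192 (key3): auth.sh queries the target's qpair list and asserts the
# negotiated digest, DH group, and completion state, as the jq probes around this point
# show. Assumes the same running SPDK target as the trace above; the $qpairs variable
# and the bare "scripts/rpc.py" invocation are illustrative, not lines from auth.sh.
#   qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
#   [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
#   [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
#   [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]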
09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.204 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:30.204 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.204 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.204 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.204 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.465 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:19:30.465 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:19:31.035 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.035 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.035 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.035 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.035 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.035 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:31.035 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:31.035 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:31.035 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:31.035 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:31.035 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:31.296 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:31.296 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.296 09:53:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:31.296 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:31.296 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:31.296 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.296 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.296 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.296 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.296 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.296 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.296 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.296 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.870 00:19:31.870 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.870 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.870 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.870 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.870 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.870 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.870 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.870 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.870 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.870 { 00:19:31.870 "cntlid": 145, 00:19:31.870 "qid": 0, 00:19:31.870 "state": "enabled", 00:19:31.870 "thread": "nvmf_tgt_poll_group_000", 00:19:31.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:31.870 "listen_address": { 00:19:31.870 "trtype": "TCP", 00:19:31.870 "adrfam": "IPv4", 00:19:31.870 "traddr": "10.0.0.2", 00:19:31.870 "trsvcid": "4420" 00:19:31.870 }, 00:19:31.870 "peer_address": { 00:19:31.870 
"trtype": "TCP", 00:19:31.870 "adrfam": "IPv4", 00:19:31.870 "traddr": "10.0.0.1", 00:19:31.870 "trsvcid": "36182" 00:19:31.870 }, 00:19:31.870 "auth": { 00:19:31.870 "state": "completed", 00:19:31.870 "digest": "sha512", 00:19:31.870 "dhgroup": "ffdhe8192" 00:19:31.870 } 00:19:31.870 } 00:19:31.870 ]' 00:19:31.870 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.131 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.131 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.131 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:32.131 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.131 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.131 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.131 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.131 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:19:32.131 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjlmZjM2NmI5M2I1ODY3MTlmNDg3ZTExMDhmZTI1OTQ4MjgyMWVlYWZjMWQ2MzI2iijJ7w==: --dhchap-ctrl-secret DHHC-1:03:MmMxZTVhNDUwOWFjN2ExZTNmMjExNjQwZjMwZTBhNjQzYTlhOWZlYzQ1Y2FlMDg4ZWU4NTc4MTA3NTUwOWVjY6iUJvY=: 00:19:33.073 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.073 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:33.073 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.073 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.073 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.073 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:33.073 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.073 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.073 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.073 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:33.073 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:33.073 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:33.073 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:33.073 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.073 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:33.073 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.073 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:33.073 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:33.073 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:33.334 request: 00:19:33.334 { 00:19:33.334 "name": "nvme0", 00:19:33.334 "trtype": "tcp", 00:19:33.334 "traddr": "10.0.0.2", 00:19:33.334 "adrfam": "ipv4", 00:19:33.334 "trsvcid": "4420", 00:19:33.334 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:33.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:33.334 "prchk_reftag": false, 00:19:33.334 "prchk_guard": false, 00:19:33.334 "hdgst": false, 00:19:33.334 "ddgst": false, 00:19:33.334 "dhchap_key": "key2", 00:19:33.334 "allow_unrecognized_csi": false, 00:19:33.334 "method": "bdev_nvme_attach_controller", 00:19:33.334 "req_id": 1 00:19:33.334 } 00:19:33.334 Got JSON-RPC error response 00:19:33.334 response: 00:19:33.334 { 00:19:33.334 "code": -5, 00:19:33.334 "message": "Input/output error" 00:19:33.334 } 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.334 09:53:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:33.334 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:33.905 request: 00:19:33.905 { 00:19:33.905 "name": "nvme0", 00:19:33.905 "trtype": "tcp", 00:19:33.905 "traddr": "10.0.0.2", 00:19:33.905 "adrfam": "ipv4", 00:19:33.905 "trsvcid": "4420", 00:19:33.905 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:33.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:33.905 "prchk_reftag": false, 00:19:33.905 "prchk_guard": false, 00:19:33.905 "hdgst": false, 00:19:33.905 "ddgst": false, 00:19:33.905 "dhchap_key": "key1", 00:19:33.905 "dhchap_ctrlr_key": "ckey2", 00:19:33.905 "allow_unrecognized_csi": false, 00:19:33.905 "method": "bdev_nvme_attach_controller", 00:19:33.905 "req_id": 1 00:19:33.906 } 00:19:33.906 Got JSON-RPC error response 00:19:33.906 response: 00:19:33.906 { 00:19:33.906 "code": -5, 00:19:33.906 "message": "Input/output error" 00:19:33.906 } 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:33.906 09:53:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.906 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.166 request: 00:19:34.167 { 00:19:34.167 "name": "nvme0", 00:19:34.167 "trtype": "tcp", 00:19:34.167 "traddr": "10.0.0.2", 00:19:34.167 "adrfam": "ipv4", 00:19:34.167 "trsvcid": "4420", 00:19:34.167 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:34.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:34.167 "prchk_reftag": false, 00:19:34.167 "prchk_guard": false, 00:19:34.167 "hdgst": false, 00:19:34.167 "ddgst": false, 00:19:34.167 "dhchap_key": "key1", 00:19:34.167 "dhchap_ctrlr_key": "ckey1", 00:19:34.167 "allow_unrecognized_csi": false, 00:19:34.167 "method": "bdev_nvme_attach_controller", 00:19:34.167 "req_id": 1 00:19:34.167 } 00:19:34.167 Got JSON-RPC error response 00:19:34.167 response: 00:19:34.167 { 00:19:34.167 "code": -5, 00:19:34.167 "message": "Input/output error" 00:19:34.167 } 00:19:34.427 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:34.427 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:34.427 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:34.427 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:34.427 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.427 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.427 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1345288 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1345288 ']' 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1345288 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1345288 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1345288' 00:19:34.428 killing process with pid 1345288 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1345288 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1345288 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1371580 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1371580 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1371580 ']' 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.428 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.370 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.370 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:35.370 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:35.370 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:35.370 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.370 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.370 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:35.370 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1371580 00:19:35.370 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1371580 ']' 00:19:35.370 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.370 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.370 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
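Note: the relaunch above starts a fresh nvmf_tgt paused at init (--wait-for-rpc) with DH-CHAP debug logging (-L nvmf_auth), so the keyring can be populated before any subsystem comes up. A minimal sketch of that relaunch, assuming the namespace, binary path, and default socket shown in this log; the polling loop is only a stand-in for the suite's waitforlisten helper, not its actual implementation:

    # Start the target inside the test namespace, paused until RPC-driven init.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF \
        --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Poll the default RPC socket (/var/tmp/spdk.sock) until the app answers;
    # only then is it safe to send keyring and framework configuration.
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done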
00:19:35.370 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.370 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.631 null0 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5aJ 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.PEA ]] 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PEA 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.PxE 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.h0w ]] 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.h0w 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:35.631 09:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ShE 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.hCe ]] 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hCe 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.iLE 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.631 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.891 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.891 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:35.891 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:35.891 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.891 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:35.891 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:35.891 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:35.891 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.891 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:35.891 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.891 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.891 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.891 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:35.891 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
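Note: the keyring_file_add_key calls above register key0..key3 (plus the ckey* controller keys) from the /tmp/spdk.key-* files, after which connect_authenticate re-runs with sha512/ffdhe8192 and key3. Condensed, one such authenticated round trip looks like the sketch below, assuming the NQNs, the 10.0.0.2:4420 listener, and the /var/tmp/host.sock path used throughout this log:

    # Host side: allow every digest/dhgroup combination under test.
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

    # Target side (default /var/tmp/spdk.sock): authorize the host NQN for key3.
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key3

    # Attach from the host; DH-CHAP runs as part of the fabrics CONNECT.
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

    # Confirm what was negotiated on the target's qpair.
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | .state, .digest, .dhgroup'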
00:19:35.891 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.463 nvme0n1 00:19:36.463 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.463 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.463 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.723 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.723 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.723 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.723 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.723 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.723 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.723 { 00:19:36.723 "cntlid": 1, 00:19:36.723 "qid": 0, 00:19:36.723 "state": "enabled", 00:19:36.723 "thread": "nvmf_tgt_poll_group_000", 00:19:36.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:36.723 "listen_address": { 00:19:36.723 "trtype": "TCP", 00:19:36.723 "adrfam": "IPv4", 00:19:36.723 "traddr": "10.0.0.2", 00:19:36.723 "trsvcid": "4420" 00:19:36.723 }, 00:19:36.723 "peer_address": { 00:19:36.723 "trtype": "TCP", 00:19:36.723 "adrfam": "IPv4", 00:19:36.723 "traddr": "10.0.0.1", 00:19:36.723 "trsvcid": "49900" 00:19:36.723 }, 00:19:36.723 "auth": { 00:19:36.723 "state": "completed", 00:19:36.723 "digest": "sha512", 00:19:36.723 "dhgroup": "ffdhe8192" 00:19:36.723 } 00:19:36.723 } 00:19:36.723 ]' 00:19:36.723 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.723 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.723 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.723 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:36.724 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.984 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.984 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.984 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.984 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:19:36.984 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:19:37.555 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:37.816 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:38.078 request: 00:19:38.078 { 00:19:38.078 "name": "nvme0", 00:19:38.078 "trtype": "tcp", 00:19:38.078 "traddr": "10.0.0.2", 00:19:38.078 "adrfam": "ipv4", 00:19:38.078 "trsvcid": "4420", 00:19:38.078 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:38.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:38.078 "prchk_reftag": false, 00:19:38.078 "prchk_guard": false, 00:19:38.078 "hdgst": false, 00:19:38.078 "ddgst": false, 00:19:38.078 "dhchap_key": "key3", 00:19:38.078 "allow_unrecognized_csi": false, 00:19:38.078 "method": "bdev_nvme_attach_controller", 00:19:38.078 "req_id": 1 00:19:38.078 } 00:19:38.078 Got JSON-RPC error response 00:19:38.078 response: 00:19:38.078 { 00:19:38.078 "code": -5, 00:19:38.078 "message": "Input/output error" 00:19:38.078 } 00:19:38.078 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:38.078 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:38.078 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:38.078 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:38.078 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:38.078 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:38.078 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:38.078 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:38.339 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:38.339 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:38.339 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:38.339 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:38.339 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.339 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:38.339 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.339 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:38.339 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:38.339 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:38.600 request: 00:19:38.600 { 00:19:38.600 "name": "nvme0", 00:19:38.600 "trtype": "tcp", 00:19:38.600 "traddr": "10.0.0.2", 00:19:38.600 "adrfam": "ipv4", 00:19:38.600 "trsvcid": "4420", 00:19:38.600 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:38.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:38.600 "prchk_reftag": false, 00:19:38.600 "prchk_guard": false, 00:19:38.600 "hdgst": false, 00:19:38.600 "ddgst": false, 00:19:38.600 "dhchap_key": "key3", 00:19:38.600 "allow_unrecognized_csi": false, 00:19:38.600 "method": "bdev_nvme_attach_controller", 00:19:38.600 "req_id": 1 00:19:38.600 } 00:19:38.600 Got JSON-RPC error response 00:19:38.600 response: 00:19:38.600 { 00:19:38.600 "code": -5, 00:19:38.600 "message": "Input/output error" 00:19:38.600 } 00:19:38.600 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:38.600 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:38.600 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:38.601 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:38.601 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:38.601 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:38.601 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:38.601 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:38.601 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:38.601 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:38.601 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:38.601 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.601 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.601 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.601 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:38.601 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.601 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.601 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.601 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:38.601 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:38.861 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:38.861 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:38.861 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.861 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:38.861 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.861 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:38.861 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:38.861 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:39.123 request: 00:19:39.123 { 00:19:39.123 "name": "nvme0", 00:19:39.123 "trtype": "tcp", 00:19:39.123 "traddr": "10.0.0.2", 00:19:39.123 "adrfam": "ipv4", 00:19:39.123 "trsvcid": "4420", 00:19:39.123 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:39.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:39.123 "prchk_reftag": false, 00:19:39.123 "prchk_guard": false, 00:19:39.123 "hdgst": false, 00:19:39.123 "ddgst": false, 00:19:39.123 "dhchap_key": "key0", 00:19:39.123 "dhchap_ctrlr_key": "key1", 00:19:39.123 "allow_unrecognized_csi": false, 00:19:39.123 "method": "bdev_nvme_attach_controller", 00:19:39.123 "req_id": 1 00:19:39.123 } 00:19:39.123 Got JSON-RPC error response 00:19:39.123 response: 00:19:39.123 { 00:19:39.123 "code": -5, 00:19:39.123 "message": "Input/output error" 00:19:39.123 } 00:19:39.123 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:39.123 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:39.123 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:39.123 09:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:39.123 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:39.123 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:39.123 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:39.383 nvme0n1 00:19:39.383 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:39.383 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:39.383 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.383 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.383 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.383 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.643 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:39.643 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.643 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.643 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.643 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:39.643 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:39.643 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:40.584 nvme0n1 00:19:40.584 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:40.584 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:40.584 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.584 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.584 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:40.584 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.584 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.584 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.584 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:40.584 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:40.584 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.845 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.845 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:19:40.845 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: --dhchap-ctrl-secret DHHC-1:03:YzEwNGY1NTYxNTRjNjU4OTMyYjEyYzBlODY5ODhiNmRhNGMxNmU4MmQyODkyNzAyZDFlY2VmOGI4N2JkMzlhZnmzGvg=: 00:19:41.416 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:41.416 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:41.416 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:41.416 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:41.416 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:41.416 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:41.416 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:41.416 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.416 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.676 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:19:41.676 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:41.676 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:41.676 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:41.676 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.676 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:41.676 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.676 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:41.676 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:41.676 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:41.938 request: 00:19:41.938 { 00:19:41.938 "name": "nvme0", 00:19:41.938 "trtype": "tcp", 00:19:41.938 "traddr": "10.0.0.2", 00:19:41.938 "adrfam": "ipv4", 00:19:41.938 "trsvcid": "4420", 00:19:41.938 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:41.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:41.938 "prchk_reftag": false, 00:19:41.938 "prchk_guard": false, 00:19:41.938 "hdgst": false, 00:19:41.938 "ddgst": false, 00:19:41.938 "dhchap_key": "key1", 00:19:41.938 "allow_unrecognized_csi": false, 00:19:41.938 "method": "bdev_nvme_attach_controller", 00:19:41.938 "req_id": 1 00:19:41.938 } 00:19:41.938 Got JSON-RPC error response 00:19:41.938 response: 00:19:41.938 { 00:19:41.938 "code": -5, 00:19:41.938 "message": "Input/output error" 00:19:41.938 } 00:19:41.938 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:41.938 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.938 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.938 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.938 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:41.938 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:41.938 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:42.880 nvme0n1 00:19:42.880 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:42.880 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:42.880 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.880 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.880 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.880 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.140 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:43.140 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.140 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.140 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.140 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:43.140 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:43.140 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:43.401 nvme0n1 00:19:43.401 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:43.401 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:43.401 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.662 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.662 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.662 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.923 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:43.923 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.923 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.923 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.923 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: '' 2s 00:19:43.923 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:43.923 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:43.923 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: 00:19:43.923 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:43.923 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:43.923 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:43.923 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: ]] 00:19:43.923 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OGNhZGViNDU3OWZkNWE4MTcwY2UyNDJmODA2OTFlZTGXPKvD: 00:19:43.923 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:43.923 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:43.923 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: 2s 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: ]] 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MTE1MmU1M2RlNTU3NzkxMzM3NTA1NDlmZmViMWEzZDhmODgwOTBlYzRkYWZlNTAyxXEW/Q==: 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:45.839 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:47.755 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:47.755 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:47.755 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:47.755 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:47.755 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:47.755 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:48.016 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:48.016 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.016 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:48.016 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.016 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.016 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.016 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:48.016 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:48.016 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:48.587 nvme0n1 00:19:48.587 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:48.587 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.587 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.587 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.587 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:48.587 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:49.160 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:49.160 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:49.160 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.422 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.422 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:49.422 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.422 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.422 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.422 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:49.422 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:49.422 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:49.422 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:49.422 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.683 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.683 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:49.683 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.683 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.683 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.683 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:49.683 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:49.683 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:49.683 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:49.683 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.683 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:49.683 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.683 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:49.683 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:50.255 request: 00:19:50.255 { 00:19:50.255 "name": "nvme0", 00:19:50.255 "dhchap_key": "key1", 00:19:50.255 "dhchap_ctrlr_key": "key3", 00:19:50.255 "method": "bdev_nvme_set_keys", 00:19:50.255 "req_id": 1 00:19:50.255 } 00:19:50.255 Got JSON-RPC error response 00:19:50.255 response: 00:19:50.255 { 00:19:50.255 "code": -13, 00:19:50.255 "message": "Permission denied" 00:19:50.255 } 00:19:50.255 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:50.255 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:50.255 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:50.255 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:50.255 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:50.255 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.255 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:50.255 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:19:50.255 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:51.198 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:51.198 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:51.199 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.460 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:51.460 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:51.460 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.460 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.460 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.460 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:51.460 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:51.460 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:52.403 nvme0n1 00:19:52.403 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:52.403 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.403 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.403 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.403 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:52.403 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:52.403 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:52.403 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
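[editor's note] The passage above rotates the DH-HMAC-CHAP keys on a live connection: the target-side nvmf_subsystem_set_keys narrows which key pair this host may use, and the host-side bdev_nvme_set_keys then re-authenticates the existing nvme0 controller. A minimal sketch of that flow, assuming the sockets, NQNs, and key names from this run (the keys were registered with the host keyring earlier in the test):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # target side (default /var/tmp/spdk.sock): pin the subsystem/host pair to key2/key3
    $rpc nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # host side: re-key the live controller with the matching pair
    $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # a non-matching pair (e.g. key1/key3) is rejected with JSON-RPC error -13,
    # "Permission denied" -- exactly what the NOT-wrapped calls in this trace assert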
00:19:52.403 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.403 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:52.403 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.403 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:52.403 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:52.664 request: 00:19:52.664 { 00:19:52.664 "name": "nvme0", 00:19:52.664 "dhchap_key": "key2", 00:19:52.664 "dhchap_ctrlr_key": "key0", 00:19:52.664 "method": "bdev_nvme_set_keys", 00:19:52.664 "req_id": 1 00:19:52.664 } 00:19:52.664 Got JSON-RPC error response 00:19:52.664 response: 00:19:52.664 { 00:19:52.664 "code": -13, 00:19:52.664 "message": "Permission denied" 00:19:52.664 } 00:19:52.664 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:52.664 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:52.664 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:52.664 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:52.664 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:52.664 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:52.664 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.925 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:52.925 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:53.866 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:53.866 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:53.866 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.127 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:54.127 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:54.127 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:54.127 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1345408 00:19:54.127 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1345408 ']' 00:19:54.127 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1345408 00:19:54.127 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:54.127 
09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:54.127 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1345408 00:19:54.127 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:54.127 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:54.127 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1345408' 00:19:54.127 killing process with pid 1345408 00:19:54.127 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1345408 00:19:54.127 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1345408 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:54.388 rmmod nvme_tcp 00:19:54.388 rmmod nvme_fabrics 00:19:54.388 rmmod nvme_keyring 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1371580 ']' 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1371580 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1371580 ']' 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1371580 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1371580 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1371580' 00:19:54.388 killing process with pid 1371580 00:19:54.388 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1371580 00:19:54.388 09:53:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1371580 00:19:54.649 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:54.649 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:54.649 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:54.649 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:54.649 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:54.649 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:54.649 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:54.649 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:54.649 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:54.649 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.649 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:54.649 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.563 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:56.563 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.5aJ /tmp/spdk.key-sha256.PxE /tmp/spdk.key-sha384.ShE /tmp/spdk.key-sha512.iLE /tmp/spdk.key-sha512.PEA /tmp/spdk.key-sha384.h0w /tmp/spdk.key-sha256.hCe '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:56.563 00:19:56.563 real 2m36.837s 00:19:56.563 user 5m52.999s 00:19:56.563 sys 0m24.696s 00:19:56.563 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:56.563 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.563 ************************************ 00:19:56.563 END TEST nvmf_auth_target 00:19:56.563 ************************************ 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:56.823 ************************************ 00:19:56.823 START TEST nvmf_bdevio_no_huge 00:19:56.823 ************************************ 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:56.823 * Looking for test storage... 
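[editor's note] The nvmf_auth_target suite ends above and the harness immediately launches the bdevio variant, which runs the target without hugepages. To reproduce this step outside the Jenkins wrapper, the invocation reduces to roughly the following (paths as used in this run; run_test is the autotest_common.sh helper that prints the START/END banners and times the script):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    source test/common/autotest_common.sh
    # same arguments the harness passes in the trace above
    run_test "nvmf_bdevio_no_huge" test/nvmf/target/bdevio.sh \
        --transport=tcp --no-hugepages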
00:19:56.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:56.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.823 --rc genhtml_branch_coverage=1 00:19:56.823 --rc genhtml_function_coverage=1 00:19:56.823 --rc genhtml_legend=1 00:19:56.823 --rc geninfo_all_blocks=1 00:19:56.823 --rc geninfo_unexecuted_blocks=1 00:19:56.823 00:19:56.823 ' 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:56.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.823 --rc genhtml_branch_coverage=1 00:19:56.823 --rc genhtml_function_coverage=1 00:19:56.823 --rc genhtml_legend=1 00:19:56.823 --rc geninfo_all_blocks=1 00:19:56.823 --rc geninfo_unexecuted_blocks=1 00:19:56.823 00:19:56.823 ' 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:56.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.823 --rc genhtml_branch_coverage=1 00:19:56.823 --rc genhtml_function_coverage=1 00:19:56.823 --rc genhtml_legend=1 00:19:56.823 --rc geninfo_all_blocks=1 00:19:56.823 --rc geninfo_unexecuted_blocks=1 00:19:56.823 00:19:56.823 ' 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:56.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.823 --rc genhtml_branch_coverage=1 00:19:56.823 --rc genhtml_function_coverage=1 00:19:56.823 --rc genhtml_legend=1 00:19:56.823 --rc geninfo_all_blocks=1 00:19:56.823 --rc geninfo_unexecuted_blocks=1 00:19:56.823 00:19:56.823 ' 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.823 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:57.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:57.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:20:05.224 
09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:05.224 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:05.224 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:05.224 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:05.225 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:05.225 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:05.225 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:05.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:05.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:20:05.225 00:20:05.225 --- 10.0.0.2 ping statistics --- 00:20:05.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.225 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:05.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:05.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:20:05.225 00:20:05.225 --- 10.0.0.1 ping statistics --- 00:20:05.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.225 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1379757 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1379757 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1379757 ']' 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.225 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.225 [2024-11-20 09:53:35.366665] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:20:05.225 [2024-11-20 09:53:35.366736] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:05.225 [2024-11-20 09:53:35.471185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:05.225 [2024-11-20 09:53:35.531194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.225 [2024-11-20 09:53:35.531240] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.225 [2024-11-20 09:53:35.531249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.226 [2024-11-20 09:53:35.531256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.226 [2024-11-20 09:53:35.531263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
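The nvmf_tcp_init sequence traced above reduces to a short ip/iptables recipe: move the target-side port into its own network namespace, address both ends, open TCP/4420, and ping in both directions. A condensed sketch of what common.sh just did (the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are simply the values this run selected):

# Target port lives in a namespace; initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port, tagged with a comment so cleanup can find the rule
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Verify reachability in both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1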
00:20:05.226 [2024-11-20 09:53:35.533112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:05.226 [2024-11-20 09:53:35.533421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:05.226 [2024-11-20 09:53:35.533647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:05.226 [2024-11-20 09:53:35.533744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:05.488 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.488 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:20:05.488 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:05.488 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:05.488 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.488 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.489 [2024-11-20 09:53:36.249116] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.489 Malloc0 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.489 [2024-11-20 09:53:36.302993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:05.489 { 00:20:05.489 "params": { 00:20:05.489 "name": "Nvme$subsystem", 00:20:05.489 "trtype": "$TEST_TRANSPORT", 00:20:05.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.489 "adrfam": "ipv4", 00:20:05.489 "trsvcid": "$NVMF_PORT", 00:20:05.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.489 "hdgst": ${hdgst:-false}, 00:20:05.489 "ddgst": ${ddgst:-false} 00:20:05.489 }, 00:20:05.489 "method": "bdev_nvme_attach_controller" 00:20:05.489 } 00:20:05.489 EOF 00:20:05.489 )") 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:20:05.489 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:05.489 "params": { 00:20:05.489 "name": "Nvme1", 00:20:05.489 "trtype": "tcp", 00:20:05.489 "traddr": "10.0.0.2", 00:20:05.489 "adrfam": "ipv4", 00:20:05.489 "trsvcid": "4420", 00:20:05.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:05.489 "hdgst": false, 00:20:05.489 "ddgst": false 00:20:05.489 }, 00:20:05.489 "method": "bdev_nvme_attach_controller" 00:20:05.489 }' 00:20:05.489 [2024-11-20 09:53:36.360503] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
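Everything bdevio needs on the target side is assembled through five rpc_cmd calls, all visible in the trace above (rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock); condensed:

rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, flags as set by bdevio.sh
rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM bdev, 512 B blocks (the 131072-block Nvme1n1 seen below)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator side needs no kernel connect here: bdevio consumes the bdev_nvme_attach_controller JSON rendered above, delivered over /dev/fd/62.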
00:20:05.489 [2024-11-20 09:53:36.360575] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1380031 ] 00:20:05.749 [2024-11-20 09:53:36.458664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:05.749 [2024-11-20 09:53:36.518788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.749 [2024-11-20 09:53:36.518948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.749 [2024-11-20 09:53:36.518948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.010 I/O targets: 00:20:06.010 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:06.010 00:20:06.010 00:20:06.010 CUnit - A unit testing framework for C - Version 2.1-3 00:20:06.010 http://cunit.sourceforge.net/ 00:20:06.010 00:20:06.010 00:20:06.010 Suite: bdevio tests on: Nvme1n1 00:20:06.010 Test: blockdev write read block ...passed 00:20:06.010 Test: blockdev write zeroes read block ...passed 00:20:06.271 Test: blockdev write zeroes read no split ...passed 00:20:06.271 Test: blockdev write zeroes read split ...passed 00:20:06.271 Test: blockdev write zeroes read split partial ...passed 00:20:06.271 Test: blockdev reset ...[2024-11-20 09:53:36.966567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:06.271 [2024-11-20 09:53:36.966672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1faf800 (9): Bad file descriptor 00:20:06.271 [2024-11-20 09:53:36.982409] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:20:06.271 passed 00:20:06.271 Test: blockdev write read 8 blocks ...passed 00:20:06.271 Test: blockdev write read size > 128k ...passed 00:20:06.271 Test: blockdev write read invalid size ...passed 00:20:06.271 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:06.271 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:06.271 Test: blockdev write read max offset ...passed 00:20:06.271 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:06.271 Test: blockdev writev readv 8 blocks ...passed 00:20:06.271 Test: blockdev writev readv 30 x 1block ...passed 00:20:06.532 Test: blockdev writev readv block ...passed 00:20:06.532 Test: blockdev writev readv size > 128k ...passed 00:20:06.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:06.532 Test: blockdev comparev and writev ...[2024-11-20 09:53:37.199053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.532 [2024-11-20 09:53:37.199102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.532 [2024-11-20 09:53:37.199120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.532 [2024-11-20 09:53:37.199131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:06.532 [2024-11-20 09:53:37.199437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.532 [2024-11-20 09:53:37.199451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:06.532 [2024-11-20 09:53:37.199466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.532 [2024-11-20 09:53:37.199474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:06.532 [2024-11-20 09:53:37.199772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.532 [2024-11-20 09:53:37.199791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:06.532 [2024-11-20 09:53:37.199806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.532 [2024-11-20 09:53:37.199815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:06.532 [2024-11-20 09:53:37.200099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.532 [2024-11-20 09:53:37.200111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:06.532 [2024-11-20 09:53:37.200125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.532 [2024-11-20 09:53:37.200144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:06.532 passed 00:20:06.532 Test: blockdev nvme passthru rw ...passed 00:20:06.532 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:53:37.283502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:06.532 [2024-11-20 09:53:37.283519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:06.532 [2024-11-20 09:53:37.283652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:06.532 [2024-11-20 09:53:37.283663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:06.532 [2024-11-20 09:53:37.283785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:06.532 [2024-11-20 09:53:37.283796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:06.532 [2024-11-20 09:53:37.283921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:06.532 [2024-11-20 09:53:37.283931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:06.532 passed 00:20:06.532 Test: blockdev nvme admin passthru ...passed 00:20:06.532 Test: blockdev copy ...passed 00:20:06.532 00:20:06.532 Run Summary: Type Total Ran Passed Failed Inactive 00:20:06.532 suites 1 1 n/a 0 0 00:20:06.532 tests 23 23 23 0 0 00:20:06.532 asserts 152 152 152 0 n/a 00:20:06.532 00:20:06.532 Elapsed time = 1.043 seconds 00:20:06.794 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:06.794 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.794 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:06.794 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.794 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:06.794 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:06.794 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:06.794 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:20:06.794 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:06.794 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:20:06.794 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:06.794 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:06.794 rmmod nvme_tcp 00:20:06.794 rmmod nvme_fabrics 00:20:06.794 rmmod nvme_keyring 00:20:07.054 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:07.054 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:20:07.054 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:20:07.054 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1379757 ']' 00:20:07.054 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1379757 00:20:07.054 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1379757 ']' 00:20:07.054 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1379757 00:20:07.054 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:20:07.054 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.054 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1379757 00:20:07.054 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:20:07.054 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:20:07.054 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1379757' 00:20:07.054 killing process with pid 1379757 00:20:07.054 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1379757 00:20:07.054 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1379757 00:20:07.315 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:07.315 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:07.315 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:07.315 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:20:07.315 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:20:07.315 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:20:07.315 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:07.315 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:07.315 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:07.315 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.315 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.315 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:09.859 00:20:09.859 real 0m12.712s 00:20:09.859 user 0m14.652s 00:20:09.859 sys 0m6.861s 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:20:09.859 ************************************ 00:20:09.859 END TEST nvmf_bdevio_no_huge 00:20:09.859 ************************************ 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:09.859 ************************************ 00:20:09.859 START TEST nvmf_tls 00:20:09.859 ************************************ 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:09.859 * Looking for test storage... 00:20:09.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:09.859 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:09.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.860 --rc genhtml_branch_coverage=1 00:20:09.860 --rc genhtml_function_coverage=1 00:20:09.860 --rc genhtml_legend=1 00:20:09.860 --rc geninfo_all_blocks=1 00:20:09.860 --rc geninfo_unexecuted_blocks=1 00:20:09.860 00:20:09.860 ' 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:09.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.860 --rc genhtml_branch_coverage=1 00:20:09.860 --rc genhtml_function_coverage=1 00:20:09.860 --rc genhtml_legend=1 00:20:09.860 --rc geninfo_all_blocks=1 00:20:09.860 --rc geninfo_unexecuted_blocks=1 00:20:09.860 00:20:09.860 ' 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:09.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.860 --rc genhtml_branch_coverage=1 00:20:09.860 --rc genhtml_function_coverage=1 00:20:09.860 --rc genhtml_legend=1 00:20:09.860 --rc geninfo_all_blocks=1 00:20:09.860 --rc geninfo_unexecuted_blocks=1 00:20:09.860 00:20:09.860 ' 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:09.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.860 --rc genhtml_branch_coverage=1 00:20:09.860 --rc genhtml_function_coverage=1 00:20:09.860 --rc genhtml_legend=1 00:20:09.860 --rc geninfo_all_blocks=1 00:20:09.860 --rc geninfo_unexecuted_blocks=1 00:20:09.860 00:20:09.860 ' 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
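The lcov probe above leans on the scripts/common.sh version helpers; stripped of the op dispatch, the traced comparison is a field-wise numeric walk. A simplified sketch (not the verbatim cmp_versions):

# Return success if $1 is an older version than $2 (fields split on . - :)
lt() {
    local IFS=.-: i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1 # equal versions are not less-than
}
lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* options"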
00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:09.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:09.860 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:09.861 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:09.861 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.861 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:09.861 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.861 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:09.861 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:09.861 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:20:09.861 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:18.083 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:18.083 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:18.083 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:18.083 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:18.083 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.083 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:18.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:20:18.084 00:20:18.084 --- 10.0.0.2 ping statistics --- 00:20:18.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.084 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:18.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:18.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:20:18.084 00:20:18.084 --- 10.0.0.1 ping statistics --- 00:20:18.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.084 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1384461 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1384461 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1384461 ']' 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.084 [2024-11-20 09:53:48.169463] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
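Note the --wait-for-rpc on this second nvmf_tgt: the ssl socket-implementation options have to be set before framework_start_init runs, so tls.sh starts the app paused and configures the socket layer over RPC first. Condensed from the probes traced below:

# Start paused, switch the default sock impl to ssl, pin TLS 1.3, then init
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
rpc.py sock_set_default_impl -i ssl
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o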
00:20:18.084 [2024-11-20 09:53:48.169541] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.084 [2024-11-20 09:53:48.273359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.084 [2024-11-20 09:53:48.323602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.084 [2024-11-20 09:53:48.323657] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.084 [2024-11-20 09:53:48.323666] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.084 [2024-11-20 09:53:48.323673] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.084 [2024-11-20 09:53:48.323679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:18.084 [2024-11-20 09:53:48.324448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:18.084 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.345 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.345 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:18.345 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:18.345 true 00:20:18.345 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:18.345 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:18.606 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:18.606 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:18.606 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:18.867 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:18.867 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:19.129 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:19.129 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:19.129 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:19.129 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:19.129 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:19.391 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:19.391 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:19.391 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:19.391 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:19.652 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:19.652 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:19.652 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:19.913 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:19.913 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:19.913 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:19.913 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:19.913 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:20.174 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:20.174 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.BAilOcvkZX 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.5PyJJX7lEM 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.BAilOcvkZX 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.5PyJJX7lEM 00:20:20.436 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:20.697 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:20.958 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.BAilOcvkZX 00:20:20.958 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BAilOcvkZX 00:20:20.958 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:21.220 [2024-11-20 09:53:51.894007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.220 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:21.220 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:21.482 [2024-11-20 09:53:52.278976] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.482 [2024-11-20 09:53:52.279365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.482 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:21.742 malloc0 00:20:21.742 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:22.003 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BAilOcvkZX 00:20:22.003 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:22.264 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.BAilOcvkZX 00:20:34.491 Initializing NVMe Controllers 00:20:34.491 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:34.491 Initialization complete. Launching workers. 00:20:34.491 ======================================================== 00:20:34.491 Latency(us) 00:20:34.491 Device Information : IOPS MiB/s Average min max 00:20:34.491 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18588.79 72.61 3443.16 1090.99 4054.75 00:20:34.491 ======================================================== 00:20:34.491 Total : 18588.79 72.61 3443.16 1090.99 4054.75 00:20:34.491 00:20:34.491 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BAilOcvkZX 00:20:34.491 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:34.491 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:34.491 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:34.491 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BAilOcvkZX 00:20:34.491 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.491 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1387628 00:20:34.491 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:34.491 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1387628 /var/tmp/bdevperf.sock 00:20:34.491 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:34.491 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1387628 ']' 00:20:34.491 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.491 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.491 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
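The NVMeTLSkey-1:01:...: strings generated earlier come from format_interchange_psk, which per the trace sets prefix/key/digest locals and pipes a small program into 'python -'. A minimal reconstruction of that helper, assuming the key is taken as an ASCII string and a little-endian CRC32 is appended before base64 encoding (only the surrounding shell steps are visible in the trace; the python body is an assumption):

format_key() {
    local prefix=$1 key=$2 digest=$3
    # Assumed layout: prefix:<digest id>:base64(key || CRC32(key)):
    python3 - <<EOF
import base64, zlib
key = b"$key"
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("$prefix:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()), end="")
EOF
}

Calling format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 reproduces the NVMeTLSkey-1:01:MDAx... shape seen above, but the CRC choice and its endianness are not something this log proves.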
00:20:34.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.491 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.491 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.491 [2024-11-20 09:54:03.234885] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:20:34.491 [2024-11-20 09:54:03.234940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1387628 ] 00:20:34.491 [2024-11-20 09:54:03.321487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.491 [2024-11-20 09:54:03.357470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.491 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.491 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:34.491 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BAilOcvkZX 00:20:34.491 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:34.491 [2024-11-20 09:54:04.360948] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:34.491 TLSTESTn1 00:20:34.491 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:34.491 Running I/O for 10 seconds... 
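While the 10-second verify run ticks along below, the initiator-side setup that got here reduces to three calls against the bdevperf RPC socket, all taken verbatim from the trace (rootdir is shorthand for the SPDK checkout path used throughout this job):

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Register the PSK under a keyring name, attach the TLS-wrapped controller,
# then kick off the configured verify workload.
$rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BAilOcvkZX
$rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
$rootdir/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests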
00:20:35.694 5001.00 IOPS, 19.54 MiB/s [2024-11-20T08:54:07.993Z] 5202.00 IOPS, 20.32 MiB/s [2024-11-20T08:54:08.565Z] 5504.00 IOPS, 21.50 MiB/s [2024-11-20T08:54:09.953Z] 5595.25 IOPS, 21.86 MiB/s [2024-11-20T08:54:10.897Z] 5760.60 IOPS, 22.50 MiB/s [2024-11-20T08:54:11.839Z] 5762.00 IOPS, 22.51 MiB/s [2024-11-20T08:54:12.781Z] 5849.71 IOPS, 22.85 MiB/s [2024-11-20T08:54:13.725Z] 5806.12 IOPS, 22.68 MiB/s [2024-11-20T08:54:14.669Z] 5834.78 IOPS, 22.79 MiB/s [2024-11-20T08:54:14.669Z] 5843.00 IOPS, 22.82 MiB/s 00:20:43.753 Latency(us) 00:20:43.753 [2024-11-20T08:54:14.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.753 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:43.753 Verification LBA range: start 0x0 length 0x2000 00:20:43.753 TLSTESTn1 : 10.05 5825.62 22.76 0.00 0.00 21903.09 6144.00 50899.63 00:20:43.753 [2024-11-20T08:54:14.669Z] =================================================================================================================== 00:20:43.753 [2024-11-20T08:54:14.669Z] Total : 5825.62 22.76 0.00 0.00 21903.09 6144.00 50899.63 00:20:43.753 { 00:20:43.753 "results": [ 00:20:43.753 { 00:20:43.753 "job": "TLSTESTn1", 00:20:43.753 "core_mask": "0x4", 00:20:43.753 "workload": "verify", 00:20:43.753 "status": "finished", 00:20:43.753 "verify_range": { 00:20:43.753 "start": 0, 00:20:43.753 "length": 8192 00:20:43.753 }, 00:20:43.753 "queue_depth": 128, 00:20:43.753 "io_size": 4096, 00:20:43.753 "runtime": 10.051636, 00:20:43.753 "iops": 5825.618834585733, 00:20:43.753 "mibps": 22.756323572600518, 00:20:43.753 "io_failed": 0, 00:20:43.753 "io_timeout": 0, 00:20:43.753 "avg_latency_us": 21903.085042380357, 00:20:43.753 "min_latency_us": 6144.0, 00:20:43.753 "max_latency_us": 50899.62666666666 00:20:43.753 } 00:20:43.753 ], 00:20:43.753 "core_count": 1 00:20:43.753 } 00:20:43.753 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:43.753 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1387628 00:20:43.753 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1387628 ']' 00:20:43.753 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1387628 00:20:43.753 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:43.753 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.753 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1387628 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1387628' 00:20:44.015 killing process with pid 1387628 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1387628 00:20:44.015 Received shutdown signal, test time was about 10.000000 seconds 00:20:44.015 00:20:44.015 Latency(us) 00:20:44.015 [2024-11-20T08:54:14.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.015 [2024-11-20T08:54:14.931Z] 
=================================================================================================================== 00:20:44.015 [2024-11-20T08:54:14.931Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1387628 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5PyJJX7lEM 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5PyJJX7lEM 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5PyJJX7lEM 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5PyJJX7lEM 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1390206 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1390206 /var/tmp/bdevperf.sock 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1390206 ']' 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
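That was the positive case; from tls.sh@147 onward each scenario is expected to fail and is wrapped in NOT (common/autotest_common.sh), which succeeds only when the wrapped command exits non-zero, hence the 'return 1' and 'es=1' lines that follow each failure. The first such assertion, as invoked above:

# Must fail: the initiator registers /tmp/tmp.5PyJJX7lEM while the target
# only accepts the PSK from /tmp/tmp.BAilOcvkZX for host1/cnode1.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5PyJJX7lEM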
00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.015 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.015 [2024-11-20 09:54:14.866938] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:20:44.015 [2024-11-20 09:54:14.866994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1390206 ] 00:20:44.276 [2024-11-20 09:54:14.948538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.276 [2024-11-20 09:54:14.977477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.848 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.848 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:44.848 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5PyJJX7lEM 00:20:45.109 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:45.109 [2024-11-20 09:54:16.000134] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:45.109 [2024-11-20 09:54:16.009400] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:45.109 [2024-11-20 09:54:16.010303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a4bb0 (107): Transport endpoint is not connected 00:20:45.109 [2024-11-20 09:54:16.011299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a4bb0 (9): Bad file descriptor 00:20:45.109 [2024-11-20 09:54:16.012300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:45.109 [2024-11-20 09:54:16.012309] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:45.109 [2024-11-20 09:54:16.012314] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:45.109 [2024-11-20 09:54:16.012322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
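With a mismatched PSK the TLS handshake never completes: every read on the socket reports errno 107 (Transport endpoint is not connected), controller init gives up, and bdev_nvme_attach_controller surfaces it as the JSON-RPC Input/output error dumped next. A quick out-of-band check, not part of the script, that the two key files really carry different key material, assuming the interchange layout sketched earlier:

python3 - /tmp/tmp.BAilOcvkZX /tmp/tmp.5PyJJX7lEM <<'EOF'
import base64, sys
for path in sys.argv[1:]:
    b64 = open(path).read().strip().split(":")[2]  # NVMeTLSkey-1:<digest>:<b64>:
    print(path, base64.b64decode(b64)[:-4])        # [:-4] drops the assumed CRC32
EOF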
00:20:45.109 request: 00:20:45.109 { 00:20:45.109 "name": "TLSTEST", 00:20:45.109 "trtype": "tcp", 00:20:45.109 "traddr": "10.0.0.2", 00:20:45.109 "adrfam": "ipv4", 00:20:45.109 "trsvcid": "4420", 00:20:45.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.109 "prchk_reftag": false, 00:20:45.109 "prchk_guard": false, 00:20:45.109 "hdgst": false, 00:20:45.109 "ddgst": false, 00:20:45.109 "psk": "key0", 00:20:45.109 "allow_unrecognized_csi": false, 00:20:45.109 "method": "bdev_nvme_attach_controller", 00:20:45.109 "req_id": 1 00:20:45.109 } 00:20:45.109 Got JSON-RPC error response 00:20:45.109 response: 00:20:45.109 { 00:20:45.109 "code": -5, 00:20:45.109 "message": "Input/output error" 00:20:45.109 } 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1390206 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1390206 ']' 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1390206 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1390206 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1390206' 00:20:45.369 killing process with pid 1390206 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1390206 00:20:45.369 Received shutdown signal, test time was about 10.000000 seconds 00:20:45.369 00:20:45.369 Latency(us) 00:20:45.369 [2024-11-20T08:54:16.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.369 [2024-11-20T08:54:16.285Z] =================================================================================================================== 00:20:45.369 [2024-11-20T08:54:16.285Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1390206 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BAilOcvkZX 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.BAilOcvkZX 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BAilOcvkZX 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BAilOcvkZX 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1390455 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1390455 /var/tmp/bdevperf.sock 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1390455 ']' 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:45.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.369 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.369 [2024-11-20 09:54:16.253339] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:20:45.369 [2024-11-20 09:54:16.253392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1390455 ] 00:20:45.631 [2024-11-20 09:54:16.339401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.631 [2024-11-20 09:54:16.367032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.201 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.202 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:46.202 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BAilOcvkZX 00:20:46.462 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:46.723 [2024-11-20 09:54:17.377513] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:46.723 [2024-11-20 09:54:17.382725] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:46.723 [2024-11-20 09:54:17.382747] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:46.723 [2024-11-20 09:54:17.382767] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:46.723 [2024-11-20 09:54:17.382802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d2bb0 (107): Transport endpoint is not connected 00:20:46.723 [2024-11-20 09:54:17.383791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d2bb0 (9): Bad file descriptor 00:20:46.723 [2024-11-20 09:54:17.384793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:46.723 [2024-11-20 09:54:17.384800] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:46.723 [2024-11-20 09:54:17.384806] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:46.723 [2024-11-20 09:54:17.384813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
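This case fails one step earlier than the bad-key case: the target derives a PSK identity from the host and subsystem NQNs (the 'NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1' string above) and finds nothing, because host2 was never registered against cnode1, so even the correct key bytes cannot help. The registration that would satisfy the lookup, in the same form the script used for host1 (shown only for illustration; running it would defeat the negative test):

$rootdir/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0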
00:20:46.723 request: 00:20:46.723 { 00:20:46.723 "name": "TLSTEST", 00:20:46.723 "trtype": "tcp", 00:20:46.723 "traddr": "10.0.0.2", 00:20:46.723 "adrfam": "ipv4", 00:20:46.723 "trsvcid": "4420", 00:20:46.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.723 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:46.723 "prchk_reftag": false, 00:20:46.723 "prchk_guard": false, 00:20:46.723 "hdgst": false, 00:20:46.723 "ddgst": false, 00:20:46.723 "psk": "key0", 00:20:46.723 "allow_unrecognized_csi": false, 00:20:46.723 "method": "bdev_nvme_attach_controller", 00:20:46.723 "req_id": 1 00:20:46.723 } 00:20:46.723 Got JSON-RPC error response 00:20:46.723 response: 00:20:46.723 { 00:20:46.723 "code": -5, 00:20:46.723 "message": "Input/output error" 00:20:46.723 } 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1390455 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1390455 ']' 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1390455 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1390455 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1390455' 00:20:46.723 killing process with pid 1390455 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1390455 00:20:46.723 Received shutdown signal, test time was about 10.000000 seconds 00:20:46.723 00:20:46.723 Latency(us) 00:20:46.723 [2024-11-20T08:54:17.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.723 [2024-11-20T08:54:17.639Z] =================================================================================================================== 00:20:46.723 [2024-11-20T08:54:17.639Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1390455 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BAilOcvkZX 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.BAilOcvkZX 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BAilOcvkZX 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BAilOcvkZX 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1390799 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1390799 /var/tmp/bdevperf.sock 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1390799 ']' 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.723 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.723 [2024-11-20 09:54:17.622793] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:20:46.723 [2024-11-20 09:54:17.622847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1390799 ] 00:20:46.984 [2024-11-20 09:54:17.706374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.984 [2024-11-20 09:54:17.733279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.554 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.554 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:47.554 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BAilOcvkZX 00:20:47.815 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:48.075 [2024-11-20 09:54:18.759718] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:48.075 [2024-11-20 09:54:18.766096] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:48.075 [2024-11-20 09:54:18.766116] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:48.075 [2024-11-20 09:54:18.766135] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:48.075 [2024-11-20 09:54:18.766839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1290bb0 (107): Transport endpoint is not connected 00:20:48.075 [2024-11-20 09:54:18.767834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1290bb0 (9): Bad file descriptor 00:20:48.075 [2024-11-20 09:54:18.768836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:48.075 [2024-11-20 09:54:18.768843] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:48.075 [2024-11-20 09:54:18.768849] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:48.075 [2024-11-20 09:54:18.768857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
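Same failure shape from the other direction: host1 is valid but nqn.2016-06.io.spdk:cnode2 does not exist on the target, so the PSK identity lookup again comes up empty. For illustration only, the attach would need a cnode2 bring-up mirroring the cnode1 sequence earlier in the log (the serial number here is hypothetical):

$rootdir/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
$rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 -k
$rootdir/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk key0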
00:20:48.075 request: 00:20:48.075 { 00:20:48.075 "name": "TLSTEST", 00:20:48.075 "trtype": "tcp", 00:20:48.075 "traddr": "10.0.0.2", 00:20:48.075 "adrfam": "ipv4", 00:20:48.075 "trsvcid": "4420", 00:20:48.075 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:48.075 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.075 "prchk_reftag": false, 00:20:48.075 "prchk_guard": false, 00:20:48.075 "hdgst": false, 00:20:48.075 "ddgst": false, 00:20:48.075 "psk": "key0", 00:20:48.075 "allow_unrecognized_csi": false, 00:20:48.075 "method": "bdev_nvme_attach_controller", 00:20:48.075 "req_id": 1 00:20:48.075 } 00:20:48.075 Got JSON-RPC error response 00:20:48.075 response: 00:20:48.075 { 00:20:48.075 "code": -5, 00:20:48.075 "message": "Input/output error" 00:20:48.075 } 00:20:48.075 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1390799 00:20:48.075 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1390799 ']' 00:20:48.075 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1390799 00:20:48.075 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:48.075 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.075 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1390799 00:20:48.075 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:48.075 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:48.075 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1390799' 00:20:48.075 killing process with pid 1390799 00:20:48.075 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1390799 00:20:48.075 Received shutdown signal, test time was about 10.000000 seconds 00:20:48.075 00:20:48.075 Latency(us) 00:20:48.075 [2024-11-20T08:54:18.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.075 [2024-11-20T08:54:18.991Z] =================================================================================================================== 00:20:48.075 [2024-11-20T08:54:18.991Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:48.075 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1390799 00:20:48.075 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:48.075 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:48.075 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:48.075 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:48.075 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:48.075 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:48.076 
09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1391140 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1391140 /var/tmp/bdevperf.sock 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1391140 ']' 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.076 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.336 [2024-11-20 09:54:19.013842] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:20:48.336 [2024-11-20 09:54:19.013897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391140 ] 00:20:48.336 [2024-11-20 09:54:19.099035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.336 [2024-11-20 09:54:19.126409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.907 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.907 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:48.907 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:49.169 [2024-11-20 09:54:19.972520] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:49.169 [2024-11-20 09:54:19.972544] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:49.169 request: 00:20:49.169 { 00:20:49.169 "name": "key0", 00:20:49.169 "path": "", 00:20:49.169 "method": "keyring_file_add_key", 00:20:49.169 "req_id": 1 00:20:49.169 } 00:20:49.169 Got JSON-RPC error response 00:20:49.169 response: 00:20:49.169 { 00:20:49.169 "code": -1, 00:20:49.169 "message": "Operation not permitted" 00:20:49.169 } 00:20:49.169 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:49.430 [2024-11-20 09:54:20.157074] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:49.430 [2024-11-20 09:54:20.157104] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:49.430 request: 00:20:49.430 { 00:20:49.430 "name": "TLSTEST", 00:20:49.430 "trtype": "tcp", 00:20:49.430 "traddr": "10.0.0.2", 00:20:49.430 "adrfam": "ipv4", 00:20:49.430 "trsvcid": "4420", 00:20:49.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:49.430 "prchk_reftag": false, 00:20:49.430 "prchk_guard": false, 00:20:49.430 "hdgst": false, 00:20:49.430 "ddgst": false, 00:20:49.430 "psk": "key0", 00:20:49.430 "allow_unrecognized_csi": false, 00:20:49.430 "method": "bdev_nvme_attach_controller", 00:20:49.430 "req_id": 1 00:20:49.430 } 00:20:49.430 Got JSON-RPC error response 00:20:49.430 response: 00:20:49.430 { 00:20:49.430 "code": -126, 00:20:49.430 "message": "Required key not available" 00:20:49.430 } 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1391140 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1391140 ']' 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1391140 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1391140 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1391140' 00:20:49.430 killing process with pid 1391140 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1391140 00:20:49.430 Received shutdown signal, test time was about 10.000000 seconds 00:20:49.430 00:20:49.430 Latency(us) 00:20:49.430 [2024-11-20T08:54:20.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.430 [2024-11-20T08:54:20.346Z] =================================================================================================================== 00:20:49.430 [2024-11-20T08:54:20.346Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1391140 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1384461 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1384461 ']' 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1384461 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.430 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1384461 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1384461' 00:20:49.691 killing process with pid 1384461 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1384461 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1384461 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:49.691 09:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.mSgh9swVDZ 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.mSgh9swVDZ 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1391494 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1391494 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1391494 ']' 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.691 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.692 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.952 [2024-11-20 09:54:20.622823] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:20:49.952 [2024-11-20 09:54:20.622929] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.952 [2024-11-20 09:54:20.718901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.952 [2024-11-20 09:54:20.749176] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.952 [2024-11-20 09:54:20.749203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:49.952 [2024-11-20 09:54:20.749209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.952 [2024-11-20 09:54:20.749214] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.952 [2024-11-20 09:54:20.749218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.952 [2024-11-20 09:54:20.749681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.522 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.522 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:50.522 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:50.522 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:50.522 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.782 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.782 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.mSgh9swVDZ 00:20:50.782 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.mSgh9swVDZ 00:20:50.782 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:50.782 [2024-11-20 09:54:21.601795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.782 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:51.044 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:51.044 [2024-11-20 09:54:21.922567] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:51.044 [2024-11-20 09:54:21.922774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.044 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:51.304 malloc0 00:20:51.304 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:51.565 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.mSgh9swVDZ 00:20:51.565 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:51.825 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mSgh9swVDZ 00:20:51.825 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:20:51.825 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:51.825 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:51.825 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.mSgh9swVDZ 00:20:51.825 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.825 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:51.825 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1391861 00:20:51.825 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:51.825 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1391861 /var/tmp/bdevperf.sock 00:20:51.825 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1391861 ']' 00:20:51.825 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.825 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.825 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.825 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.825 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.825 [2024-11-20 09:54:22.634843] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
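For orientation, the target-side setup just traced and the bdevperf initiator being started here reduce to the following sequence, condensed from the trace itself; rpc.py, bdevperf and bdevperf.py stand for the full workspace paths shown above.

# Target side (default RPC socket /var/tmp/spdk.sock):
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.mSgh9swVDZ
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# Initiator side: -z parks bdevperf until it is configured over its own RPC socket:
bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mSgh9swVDZ
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
  -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

Both sides must name the same PSK material: the target resolves key0 when the host is added, the initiator when the controller is attached.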
00:20:51.825 [2024-11-20 09:54:22.634895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391861 ] 00:20:51.825 [2024-11-20 09:54:22.717623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.086 [2024-11-20 09:54:22.747101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.086 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.086 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:52.086 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mSgh9swVDZ 00:20:52.345 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:52.345 [2024-11-20 09:54:23.164145] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:52.345 TLSTESTn1 00:20:52.607 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:52.607 Running I/O for 10 seconds... 00:20:54.492 6008.00 IOPS, 23.47 MiB/s [2024-11-20T08:54:26.793Z] 5952.50 IOPS, 23.25 MiB/s [2024-11-20T08:54:27.364Z] 5827.33 IOPS, 22.76 MiB/s [2024-11-20T08:54:28.748Z] 5838.00 IOPS, 22.80 MiB/s [2024-11-20T08:54:29.704Z] 5830.00 IOPS, 22.77 MiB/s [2024-11-20T08:54:30.646Z] 5734.00 IOPS, 22.40 MiB/s [2024-11-20T08:54:31.588Z] 5705.86 IOPS, 22.29 MiB/s [2024-11-20T08:54:32.530Z] 5702.50 IOPS, 22.28 MiB/s [2024-11-20T08:54:33.472Z] 5741.11 IOPS, 22.43 MiB/s [2024-11-20T08:54:33.472Z] 5730.40 IOPS, 22.38 MiB/s 00:21:02.556 Latency(us) 00:21:02.556 [2024-11-20T08:54:33.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.556 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:02.556 Verification LBA range: start 0x0 length 0x2000 00:21:02.556 TLSTESTn1 : 10.05 5713.08 22.32 0.00 0.00 22331.00 6116.69 50025.81 00:21:02.556 [2024-11-20T08:54:33.472Z] =================================================================================================================== 00:21:02.556 [2024-11-20T08:54:33.472Z] Total : 5713.08 22.32 0.00 0.00 22331.00 6116.69 50025.81 00:21:02.556 { 00:21:02.556 "results": [ 00:21:02.556 { 00:21:02.556 "job": "TLSTESTn1", 00:21:02.556 "core_mask": "0x4", 00:21:02.556 "workload": "verify", 00:21:02.556 "status": "finished", 00:21:02.556 "verify_range": { 00:21:02.556 "start": 0, 00:21:02.556 "length": 8192 00:21:02.556 }, 00:21:02.556 "queue_depth": 128, 00:21:02.556 "io_size": 4096, 00:21:02.556 "runtime": 10.052541, 00:21:02.556 "iops": 5713.082891181443, 00:21:02.556 "mibps": 22.316730043677513, 00:21:02.556 "io_failed": 0, 00:21:02.556 "io_timeout": 0, 00:21:02.556 "avg_latency_us": 22330.99864440227, 00:21:02.556 "min_latency_us": 6116.693333333334, 00:21:02.556 "max_latency_us": 50025.81333333333 00:21:02.556 } 00:21:02.556 ], 00:21:02.556 
"core_count": 1 00:21:02.556 } 00:21:02.556 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:02.556 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1391861 00:21:02.556 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1391861 ']' 00:21:02.556 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1391861 00:21:02.556 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:02.556 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.556 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1391861 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1391861' 00:21:02.817 killing process with pid 1391861 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1391861 00:21:02.817 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.817 00:21:02.817 Latency(us) 00:21:02.817 [2024-11-20T08:54:33.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.817 [2024-11-20T08:54:33.733Z] =================================================================================================================== 00:21:02.817 [2024-11-20T08:54:33.733Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1391861 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.mSgh9swVDZ 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mSgh9swVDZ 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mSgh9swVDZ 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mSgh9swVDZ 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.mSgh9swVDZ 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1393875 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1393875 /var/tmp/bdevperf.sock 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1393875 ']' 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.817 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.817 [2024-11-20 09:54:33.670150] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
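A note on the NOT/valid_exec_arg machinery visible around this phase: it runs the wrapped function and inverts its exit status, so the harness only proceeds when run_bdevperf genuinely fails. A simplified, hypothetical equivalent of the autotest_common.sh helper (the real one also validates the argument type, as the type -t calls above show):

# Simplified sketch of the NOT idiom; not the harness source.
NOT() {
  local es=0
  "$@" || es=$?                    # run the wrapped command, capture its status
  (( es > 128 )) && return "$es"   # death by signal is a real failure, not a pass
  (( es != 0 ))                    # succeed only if the command failed
}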
00:21:02.817 [2024-11-20 09:54:33.670208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393875 ] 00:21:03.078 [2024-11-20 09:54:33.752290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.078 [2024-11-20 09:54:33.779889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.648 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.648 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:03.648 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mSgh9swVDZ 00:21:03.909 [2024-11-20 09:54:34.626124] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.mSgh9swVDZ': 0100666 00:21:03.909 [2024-11-20 09:54:34.626150] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:03.909 request: 00:21:03.909 { 00:21:03.909 "name": "key0", 00:21:03.909 "path": "/tmp/tmp.mSgh9swVDZ", 00:21:03.909 "method": "keyring_file_add_key", 00:21:03.909 "req_id": 1 00:21:03.909 } 00:21:03.909 Got JSON-RPC error response 00:21:03.909 response: 00:21:03.909 { 00:21:03.909 "code": -1, 00:21:03.909 "message": "Operation not permitted" 00:21:03.909 } 00:21:03.909 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:03.909 [2024-11-20 09:54:34.806654] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:03.909 [2024-11-20 09:54:34.806674] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:03.909 request: 00:21:03.909 { 00:21:03.909 "name": "TLSTEST", 00:21:03.909 "trtype": "tcp", 00:21:03.909 "traddr": "10.0.0.2", 00:21:03.909 "adrfam": "ipv4", 00:21:03.909 "trsvcid": "4420", 00:21:03.909 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.909 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.909 "prchk_reftag": false, 00:21:03.909 "prchk_guard": false, 00:21:03.909 "hdgst": false, 00:21:03.909 "ddgst": false, 00:21:03.909 "psk": "key0", 00:21:03.909 "allow_unrecognized_csi": false, 00:21:03.909 "method": "bdev_nvme_attach_controller", 00:21:03.909 "req_id": 1 00:21:03.909 } 00:21:03.909 Got JSON-RPC error response 00:21:03.909 response: 00:21:03.909 { 00:21:03.909 "code": -126, 00:21:03.909 "message": "Required key not available" 00:21:03.909 } 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1393875 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1393875 ']' 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1393875 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1393875 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1393875' 00:21:04.170 killing process with pid 1393875 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1393875 00:21:04.170 Received shutdown signal, test time was about 10.000000 seconds 00:21:04.170 00:21:04.170 Latency(us) 00:21:04.170 [2024-11-20T08:54:35.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.170 [2024-11-20T08:54:35.086Z] =================================================================================================================== 00:21:04.170 [2024-11-20T08:54:35.086Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1393875 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1391494 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1391494 ']' 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1391494 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.170 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1391494 00:21:04.170 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:04.170 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:04.170 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1391494' 00:21:04.170 killing process with pid 1391494 00:21:04.170 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1391494 00:21:04.170 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1391494 00:21:04.430 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:21:04.430 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:04.430 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:04.430 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.430 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:04.430 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1394225 00:21:04.430 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1394225 00:21:04.430 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1394225 ']' 00:21:04.430 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.430 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.430 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.430 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.430 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.430 [2024-11-20 09:54:35.192593] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:21:04.430 [2024-11-20 09:54:35.192634] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.430 [2024-11-20 09:54:35.275900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.430 [2024-11-20 09:54:35.304939] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.430 [2024-11-20 09:54:35.304969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.430 [2024-11-20 09:54:35.304975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.430 [2024-11-20 09:54:35.304980] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.430 [2024-11-20 09:54:35.304986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
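With a fresh target instance up (pid 1394225) and the key file still 0666, target/tls.sh@178 asserts that the server side rejects the weak key as well: setup_nvmf_tgt runs under NOT, keyring_file_add_key fails as before, and nvmf_subsystem_add_host then cannot resolve key0. Sketched expectation, with the errors quoted from the trace below:

# Server-side negative path: registering a 0666 key file fails...
rpc.py keyring_file_add_key key0 /tmp/tmp.mSgh9swVDZ
#   JSON-RPC error -1: Operation not permitted
# ...and adding a host that references the unregistered key name is an internal error:
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
#   "Key 'key0' does not exist" / JSON-RPC error -32603: Internal error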
00:21:04.430 [2024-11-20 09:54:35.305465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.697 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:04.697 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:04.697 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:04.697 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:04.697 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.697 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.697 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.mSgh9swVDZ 00:21:04.697 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:04.697 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.mSgh9swVDZ 00:21:04.697 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:21:04.697 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.697 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:21:04.697 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.697 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.mSgh9swVDZ 00:21:04.697 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.mSgh9swVDZ 00:21:04.697 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:04.697 [2024-11-20 09:54:35.591590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.018 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:05.018 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:05.308 [2024-11-20 09:54:35.964509] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:05.308 [2024-11-20 09:54:35.964711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.308 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:05.308 malloc0 00:21:05.308 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:05.573 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.mSgh9swVDZ 00:21:05.834 [2024-11-20 
09:54:36.519551] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.mSgh9swVDZ': 0100666 00:21:05.834 [2024-11-20 09:54:36.519575] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:05.834 request: 00:21:05.834 { 00:21:05.834 "name": "key0", 00:21:05.834 "path": "/tmp/tmp.mSgh9swVDZ", 00:21:05.834 "method": "keyring_file_add_key", 00:21:05.834 "req_id": 1 00:21:05.834 } 00:21:05.834 Got JSON-RPC error response 00:21:05.834 response: 00:21:05.834 { 00:21:05.834 "code": -1, 00:21:05.834 "message": "Operation not permitted" 00:21:05.834 } 00:21:05.834 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:05.834 [2024-11-20 09:54:36.700028] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:21:05.834 [2024-11-20 09:54:36.700054] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:05.834 request: 00:21:05.834 { 00:21:05.834 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.834 "host": "nqn.2016-06.io.spdk:host1", 00:21:05.834 "psk": "key0", 00:21:05.834 "method": "nvmf_subsystem_add_host", 00:21:05.834 "req_id": 1 00:21:05.834 } 00:21:05.834 Got JSON-RPC error response 00:21:05.834 response: 00:21:05.834 { 00:21:05.834 "code": -32603, 00:21:05.834 "message": "Internal error" 00:21:05.834 } 00:21:05.834 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:05.834 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:05.834 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:05.834 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:05.834 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1394225 00:21:05.834 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1394225 ']' 00:21:05.834 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1394225 00:21:05.834 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:05.834 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.834 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1394225 00:21:06.095 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:06.095 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:06.095 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1394225' 00:21:06.095 killing process with pid 1394225 00:21:06.095 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1394225 00:21:06.095 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1394225 00:21:06.095 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.mSgh9swVDZ 00:21:06.095 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:21:06.095 09:54:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:06.095 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:06.095 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.095 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1394595 00:21:06.095 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1394595 00:21:06.095 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:06.095 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1394595 ']' 00:21:06.095 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.095 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:06.095 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.095 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:06.095 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.095 [2024-11-20 09:54:36.970853] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:21:06.095 [2024-11-20 09:54:36.970907] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.356 [2024-11-20 09:54:37.062525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.356 [2024-11-20 09:54:37.091334] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.356 [2024-11-20 09:54:37.091363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.356 [2024-11-20 09:54:37.091369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.356 [2024-11-20 09:54:37.091373] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.356 [2024-11-20 09:54:37.091378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
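Final phase: target/tls.sh@182 restores the key to 0600, a third target instance repeats the full TLS setup, and save_config then snapshots both the target and the bdevperf JSON-RPC state so the target can be relaunched straight from the captured configuration (target/tls.sh@205, -c /dev/fd/62). The round-trip, sketched; /dev/stdin stands in for the inherited descriptor the harness actually uses, and nvmf_tgt for the full build/bin path:

chmod 0600 /tmp/tmp.mSgh9swVDZ                       # restore strict permissions
tgtconf=$(rpc.py save_config)                        # the large JSON dumped below
bdevperfconf=$(rpc.py -s /var/tmp/bdevperf.sock save_config)
# Relaunch the target directly from the captured config:
echo "$tgtconf" | ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/stdin

Because the keyring_file_add_key entry is part of the saved config, the restarted target comes back with key0 already registered.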
00:21:06.356 [2024-11-20 09:54:37.091836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:06.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:06.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:06.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.mSgh9swVDZ 00:21:06.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.mSgh9swVDZ 00:21:06.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:07.188 [2024-11-20 09:54:37.963145] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.188 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:07.449 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:07.449 [2024-11-20 09:54:38.324026] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:07.449 [2024-11-20 09:54:38.324229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.449 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:07.710 malloc0 00:21:07.710 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:07.970 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.mSgh9swVDZ 00:21:08.231 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:08.231 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1395091 00:21:08.231 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:08.231 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:08.231 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1395091 /var/tmp/bdevperf.sock 00:21:08.231 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1395091 ']' 00:21:08.231 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:08.231 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.231 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:08.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:08.231 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.231 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.231 [2024-11-20 09:54:39.124731] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:21:08.231 [2024-11-20 09:54:39.124786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1395091 ] 00:21:08.496 [2024-11-20 09:54:39.213083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.496 [2024-11-20 09:54:39.248179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:09.072 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.072 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:09.072 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mSgh9swVDZ 00:21:09.332 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:09.591 [2024-11-20 09:54:40.263756] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:09.591 TLSTESTn1 00:21:09.591 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:09.853 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:21:09.853 "subsystems": [ 00:21:09.853 { 00:21:09.853 "subsystem": "keyring", 00:21:09.853 "config": [ 00:21:09.853 { 00:21:09.853 "method": "keyring_file_add_key", 00:21:09.854 "params": { 00:21:09.854 "name": "key0", 00:21:09.854 "path": "/tmp/tmp.mSgh9swVDZ" 00:21:09.854 } 00:21:09.854 } 00:21:09.854 ] 00:21:09.854 }, 00:21:09.854 { 00:21:09.854 "subsystem": "iobuf", 00:21:09.854 "config": [ 00:21:09.854 { 00:21:09.854 "method": "iobuf_set_options", 00:21:09.854 "params": { 00:21:09.854 "small_pool_count": 8192, 00:21:09.854 "large_pool_count": 1024, 00:21:09.854 "small_bufsize": 8192, 00:21:09.854 "large_bufsize": 135168, 00:21:09.854 "enable_numa": false 00:21:09.854 } 00:21:09.854 } 00:21:09.854 ] 00:21:09.854 }, 00:21:09.854 { 00:21:09.854 "subsystem": "sock", 00:21:09.854 "config": [ 00:21:09.854 { 00:21:09.854 "method": "sock_set_default_impl", 00:21:09.854 "params": { 00:21:09.854 "impl_name": "posix" 
00:21:09.854 } 00:21:09.854 }, 00:21:09.854 { 00:21:09.854 "method": "sock_impl_set_options", 00:21:09.854 "params": { 00:21:09.854 "impl_name": "ssl", 00:21:09.854 "recv_buf_size": 4096, 00:21:09.854 "send_buf_size": 4096, 00:21:09.854 "enable_recv_pipe": true, 00:21:09.854 "enable_quickack": false, 00:21:09.854 "enable_placement_id": 0, 00:21:09.854 "enable_zerocopy_send_server": true, 00:21:09.854 "enable_zerocopy_send_client": false, 00:21:09.854 "zerocopy_threshold": 0, 00:21:09.854 "tls_version": 0, 00:21:09.854 "enable_ktls": false 00:21:09.854 } 00:21:09.854 }, 00:21:09.854 { 00:21:09.854 "method": "sock_impl_set_options", 00:21:09.854 "params": { 00:21:09.854 "impl_name": "posix", 00:21:09.854 "recv_buf_size": 2097152, 00:21:09.854 "send_buf_size": 2097152, 00:21:09.854 "enable_recv_pipe": true, 00:21:09.854 "enable_quickack": false, 00:21:09.854 "enable_placement_id": 0, 00:21:09.854 "enable_zerocopy_send_server": true, 00:21:09.854 "enable_zerocopy_send_client": false, 00:21:09.854 "zerocopy_threshold": 0, 00:21:09.854 "tls_version": 0, 00:21:09.854 "enable_ktls": false 00:21:09.854 } 00:21:09.854 } 00:21:09.854 ] 00:21:09.854 }, 00:21:09.854 { 00:21:09.854 "subsystem": "vmd", 00:21:09.854 "config": [] 00:21:09.854 }, 00:21:09.854 { 00:21:09.854 "subsystem": "accel", 00:21:09.854 "config": [ 00:21:09.854 { 00:21:09.854 "method": "accel_set_options", 00:21:09.854 "params": { 00:21:09.854 "small_cache_size": 128, 00:21:09.854 "large_cache_size": 16, 00:21:09.854 "task_count": 2048, 00:21:09.854 "sequence_count": 2048, 00:21:09.854 "buf_count": 2048 00:21:09.854 } 00:21:09.854 } 00:21:09.854 ] 00:21:09.854 }, 00:21:09.854 { 00:21:09.854 "subsystem": "bdev", 00:21:09.854 "config": [ 00:21:09.854 { 00:21:09.854 "method": "bdev_set_options", 00:21:09.854 "params": { 00:21:09.854 "bdev_io_pool_size": 65535, 00:21:09.854 "bdev_io_cache_size": 256, 00:21:09.854 "bdev_auto_examine": true, 00:21:09.854 "iobuf_small_cache_size": 128, 00:21:09.854 "iobuf_large_cache_size": 16 00:21:09.854 } 00:21:09.854 }, 00:21:09.854 { 00:21:09.854 "method": "bdev_raid_set_options", 00:21:09.854 "params": { 00:21:09.854 "process_window_size_kb": 1024, 00:21:09.854 "process_max_bandwidth_mb_sec": 0 00:21:09.854 } 00:21:09.854 }, 00:21:09.854 { 00:21:09.854 "method": "bdev_iscsi_set_options", 00:21:09.854 "params": { 00:21:09.854 "timeout_sec": 30 00:21:09.854 } 00:21:09.854 }, 00:21:09.854 { 00:21:09.854 "method": "bdev_nvme_set_options", 00:21:09.854 "params": { 00:21:09.854 "action_on_timeout": "none", 00:21:09.854 "timeout_us": 0, 00:21:09.854 "timeout_admin_us": 0, 00:21:09.854 "keep_alive_timeout_ms": 10000, 00:21:09.854 "arbitration_burst": 0, 00:21:09.854 "low_priority_weight": 0, 00:21:09.854 "medium_priority_weight": 0, 00:21:09.854 "high_priority_weight": 0, 00:21:09.854 "nvme_adminq_poll_period_us": 10000, 00:21:09.854 "nvme_ioq_poll_period_us": 0, 00:21:09.854 "io_queue_requests": 0, 00:21:09.854 "delay_cmd_submit": true, 00:21:09.854 "transport_retry_count": 4, 00:21:09.854 "bdev_retry_count": 3, 00:21:09.854 "transport_ack_timeout": 0, 00:21:09.854 "ctrlr_loss_timeout_sec": 0, 00:21:09.854 "reconnect_delay_sec": 0, 00:21:09.854 "fast_io_fail_timeout_sec": 0, 00:21:09.854 "disable_auto_failback": false, 00:21:09.854 "generate_uuids": false, 00:21:09.854 "transport_tos": 0, 00:21:09.854 "nvme_error_stat": false, 00:21:09.854 "rdma_srq_size": 0, 00:21:09.854 "io_path_stat": false, 00:21:09.854 "allow_accel_sequence": false, 00:21:09.854 "rdma_max_cq_size": 0, 00:21:09.854 
"rdma_cm_event_timeout_ms": 0, 00:21:09.854 "dhchap_digests": [ 00:21:09.854 "sha256", 00:21:09.854 "sha384", 00:21:09.854 "sha512" 00:21:09.854 ], 00:21:09.854 "dhchap_dhgroups": [ 00:21:09.854 "null", 00:21:09.854 "ffdhe2048", 00:21:09.854 "ffdhe3072", 00:21:09.854 "ffdhe4096", 00:21:09.854 "ffdhe6144", 00:21:09.854 "ffdhe8192" 00:21:09.854 ] 00:21:09.854 } 00:21:09.854 }, 00:21:09.854 { 00:21:09.854 "method": "bdev_nvme_set_hotplug", 00:21:09.854 "params": { 00:21:09.854 "period_us": 100000, 00:21:09.854 "enable": false 00:21:09.854 } 00:21:09.854 }, 00:21:09.854 { 00:21:09.854 "method": "bdev_malloc_create", 00:21:09.854 "params": { 00:21:09.854 "name": "malloc0", 00:21:09.854 "num_blocks": 8192, 00:21:09.854 "block_size": 4096, 00:21:09.854 "physical_block_size": 4096, 00:21:09.854 "uuid": "f08c766e-95dc-440b-935d-ee37a6239502", 00:21:09.854 "optimal_io_boundary": 0, 00:21:09.854 "md_size": 0, 00:21:09.854 "dif_type": 0, 00:21:09.854 "dif_is_head_of_md": false, 00:21:09.854 "dif_pi_format": 0 00:21:09.854 } 00:21:09.854 }, 00:21:09.854 { 00:21:09.854 "method": "bdev_wait_for_examine" 00:21:09.854 } 00:21:09.854 ] 00:21:09.854 }, 00:21:09.854 { 00:21:09.854 "subsystem": "nbd", 00:21:09.854 "config": [] 00:21:09.854 }, 00:21:09.854 { 00:21:09.854 "subsystem": "scheduler", 00:21:09.854 "config": [ 00:21:09.854 { 00:21:09.854 "method": "framework_set_scheduler", 00:21:09.854 "params": { 00:21:09.854 "name": "static" 00:21:09.854 } 00:21:09.854 } 00:21:09.854 ] 00:21:09.854 }, 00:21:09.854 { 00:21:09.854 "subsystem": "nvmf", 00:21:09.854 "config": [ 00:21:09.854 { 00:21:09.854 "method": "nvmf_set_config", 00:21:09.854 "params": { 00:21:09.854 "discovery_filter": "match_any", 00:21:09.854 "admin_cmd_passthru": { 00:21:09.854 "identify_ctrlr": false 00:21:09.854 }, 00:21:09.854 "dhchap_digests": [ 00:21:09.854 "sha256", 00:21:09.854 "sha384", 00:21:09.854 "sha512" 00:21:09.854 ], 00:21:09.854 "dhchap_dhgroups": [ 00:21:09.854 "null", 00:21:09.854 "ffdhe2048", 00:21:09.854 "ffdhe3072", 00:21:09.854 "ffdhe4096", 00:21:09.854 "ffdhe6144", 00:21:09.854 "ffdhe8192" 00:21:09.854 ] 00:21:09.854 } 00:21:09.854 }, 00:21:09.854 { 00:21:09.854 "method": "nvmf_set_max_subsystems", 00:21:09.854 "params": { 00:21:09.854 "max_subsystems": 1024 00:21:09.854 } 00:21:09.854 }, 00:21:09.854 { 00:21:09.854 "method": "nvmf_set_crdt", 00:21:09.854 "params": { 00:21:09.854 "crdt1": 0, 00:21:09.855 "crdt2": 0, 00:21:09.855 "crdt3": 0 00:21:09.855 } 00:21:09.855 }, 00:21:09.855 { 00:21:09.855 "method": "nvmf_create_transport", 00:21:09.855 "params": { 00:21:09.855 "trtype": "TCP", 00:21:09.855 "max_queue_depth": 128, 00:21:09.855 "max_io_qpairs_per_ctrlr": 127, 00:21:09.855 "in_capsule_data_size": 4096, 00:21:09.855 "max_io_size": 131072, 00:21:09.855 "io_unit_size": 131072, 00:21:09.855 "max_aq_depth": 128, 00:21:09.855 "num_shared_buffers": 511, 00:21:09.855 "buf_cache_size": 4294967295, 00:21:09.855 "dif_insert_or_strip": false, 00:21:09.855 "zcopy": false, 00:21:09.855 "c2h_success": false, 00:21:09.855 "sock_priority": 0, 00:21:09.855 "abort_timeout_sec": 1, 00:21:09.855 "ack_timeout": 0, 00:21:09.855 "data_wr_pool_size": 0 00:21:09.855 } 00:21:09.855 }, 00:21:09.855 { 00:21:09.855 "method": "nvmf_create_subsystem", 00:21:09.855 "params": { 00:21:09.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.855 "allow_any_host": false, 00:21:09.855 "serial_number": "SPDK00000000000001", 00:21:09.855 "model_number": "SPDK bdev Controller", 00:21:09.855 "max_namespaces": 10, 00:21:09.855 "min_cntlid": 1, 00:21:09.855 
"max_cntlid": 65519, 00:21:09.855 "ana_reporting": false 00:21:09.855 } 00:21:09.855 }, 00:21:09.855 { 00:21:09.855 "method": "nvmf_subsystem_add_host", 00:21:09.855 "params": { 00:21:09.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.855 "host": "nqn.2016-06.io.spdk:host1", 00:21:09.855 "psk": "key0" 00:21:09.855 } 00:21:09.855 }, 00:21:09.855 { 00:21:09.855 "method": "nvmf_subsystem_add_ns", 00:21:09.855 "params": { 00:21:09.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.855 "namespace": { 00:21:09.855 "nsid": 1, 00:21:09.855 "bdev_name": "malloc0", 00:21:09.855 "nguid": "F08C766E95DC440B935DEE37A6239502", 00:21:09.855 "uuid": "f08c766e-95dc-440b-935d-ee37a6239502", 00:21:09.855 "no_auto_visible": false 00:21:09.855 } 00:21:09.855 } 00:21:09.855 }, 00:21:09.855 { 00:21:09.855 "method": "nvmf_subsystem_add_listener", 00:21:09.855 "params": { 00:21:09.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.855 "listen_address": { 00:21:09.855 "trtype": "TCP", 00:21:09.855 "adrfam": "IPv4", 00:21:09.855 "traddr": "10.0.0.2", 00:21:09.855 "trsvcid": "4420" 00:21:09.855 }, 00:21:09.855 "secure_channel": true 00:21:09.855 } 00:21:09.855 } 00:21:09.855 ] 00:21:09.855 } 00:21:09.855 ] 00:21:09.855 }' 00:21:09.855 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:10.116 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:21:10.116 "subsystems": [ 00:21:10.116 { 00:21:10.116 "subsystem": "keyring", 00:21:10.116 "config": [ 00:21:10.116 { 00:21:10.116 "method": "keyring_file_add_key", 00:21:10.116 "params": { 00:21:10.116 "name": "key0", 00:21:10.116 "path": "/tmp/tmp.mSgh9swVDZ" 00:21:10.116 } 00:21:10.116 } 00:21:10.116 ] 00:21:10.116 }, 00:21:10.116 { 00:21:10.116 "subsystem": "iobuf", 00:21:10.116 "config": [ 00:21:10.116 { 00:21:10.116 "method": "iobuf_set_options", 00:21:10.116 "params": { 00:21:10.116 "small_pool_count": 8192, 00:21:10.116 "large_pool_count": 1024, 00:21:10.116 "small_bufsize": 8192, 00:21:10.116 "large_bufsize": 135168, 00:21:10.116 "enable_numa": false 00:21:10.116 } 00:21:10.116 } 00:21:10.116 ] 00:21:10.116 }, 00:21:10.116 { 00:21:10.116 "subsystem": "sock", 00:21:10.116 "config": [ 00:21:10.116 { 00:21:10.116 "method": "sock_set_default_impl", 00:21:10.116 "params": { 00:21:10.116 "impl_name": "posix" 00:21:10.116 } 00:21:10.116 }, 00:21:10.116 { 00:21:10.116 "method": "sock_impl_set_options", 00:21:10.116 "params": { 00:21:10.116 "impl_name": "ssl", 00:21:10.116 "recv_buf_size": 4096, 00:21:10.116 "send_buf_size": 4096, 00:21:10.117 "enable_recv_pipe": true, 00:21:10.117 "enable_quickack": false, 00:21:10.117 "enable_placement_id": 0, 00:21:10.117 "enable_zerocopy_send_server": true, 00:21:10.117 "enable_zerocopy_send_client": false, 00:21:10.117 "zerocopy_threshold": 0, 00:21:10.117 "tls_version": 0, 00:21:10.117 "enable_ktls": false 00:21:10.117 } 00:21:10.117 }, 00:21:10.117 { 00:21:10.117 "method": "sock_impl_set_options", 00:21:10.117 "params": { 00:21:10.117 "impl_name": "posix", 00:21:10.117 "recv_buf_size": 2097152, 00:21:10.117 "send_buf_size": 2097152, 00:21:10.117 "enable_recv_pipe": true, 00:21:10.117 "enable_quickack": false, 00:21:10.117 "enable_placement_id": 0, 00:21:10.117 "enable_zerocopy_send_server": true, 00:21:10.117 "enable_zerocopy_send_client": false, 00:21:10.117 "zerocopy_threshold": 0, 00:21:10.117 "tls_version": 0, 00:21:10.117 "enable_ktls": false 00:21:10.117 } 00:21:10.117 
} 00:21:10.117 ] 00:21:10.117 }, 00:21:10.117 { 00:21:10.117 "subsystem": "vmd", 00:21:10.117 "config": [] 00:21:10.117 }, 00:21:10.117 { 00:21:10.117 "subsystem": "accel", 00:21:10.117 "config": [ 00:21:10.117 { 00:21:10.117 "method": "accel_set_options", 00:21:10.117 "params": { 00:21:10.117 "small_cache_size": 128, 00:21:10.117 "large_cache_size": 16, 00:21:10.117 "task_count": 2048, 00:21:10.117 "sequence_count": 2048, 00:21:10.117 "buf_count": 2048 00:21:10.117 } 00:21:10.117 } 00:21:10.117 ] 00:21:10.117 }, 00:21:10.117 { 00:21:10.117 "subsystem": "bdev", 00:21:10.117 "config": [ 00:21:10.117 { 00:21:10.117 "method": "bdev_set_options", 00:21:10.117 "params": { 00:21:10.117 "bdev_io_pool_size": 65535, 00:21:10.117 "bdev_io_cache_size": 256, 00:21:10.117 "bdev_auto_examine": true, 00:21:10.117 "iobuf_small_cache_size": 128, 00:21:10.117 "iobuf_large_cache_size": 16 00:21:10.117 } 00:21:10.117 }, 00:21:10.117 { 00:21:10.117 "method": "bdev_raid_set_options", 00:21:10.117 "params": { 00:21:10.117 "process_window_size_kb": 1024, 00:21:10.117 "process_max_bandwidth_mb_sec": 0 00:21:10.117 } 00:21:10.117 }, 00:21:10.117 { 00:21:10.117 "method": "bdev_iscsi_set_options", 00:21:10.117 "params": { 00:21:10.117 "timeout_sec": 30 00:21:10.117 } 00:21:10.117 }, 00:21:10.117 { 00:21:10.117 "method": "bdev_nvme_set_options", 00:21:10.117 "params": { 00:21:10.117 "action_on_timeout": "none", 00:21:10.117 "timeout_us": 0, 00:21:10.117 "timeout_admin_us": 0, 00:21:10.117 "keep_alive_timeout_ms": 10000, 00:21:10.117 "arbitration_burst": 0, 00:21:10.117 "low_priority_weight": 0, 00:21:10.117 "medium_priority_weight": 0, 00:21:10.117 "high_priority_weight": 0, 00:21:10.117 "nvme_adminq_poll_period_us": 10000, 00:21:10.117 "nvme_ioq_poll_period_us": 0, 00:21:10.117 "io_queue_requests": 512, 00:21:10.117 "delay_cmd_submit": true, 00:21:10.117 "transport_retry_count": 4, 00:21:10.117 "bdev_retry_count": 3, 00:21:10.117 "transport_ack_timeout": 0, 00:21:10.117 "ctrlr_loss_timeout_sec": 0, 00:21:10.117 "reconnect_delay_sec": 0, 00:21:10.117 "fast_io_fail_timeout_sec": 0, 00:21:10.117 "disable_auto_failback": false, 00:21:10.117 "generate_uuids": false, 00:21:10.117 "transport_tos": 0, 00:21:10.117 "nvme_error_stat": false, 00:21:10.117 "rdma_srq_size": 0, 00:21:10.117 "io_path_stat": false, 00:21:10.117 "allow_accel_sequence": false, 00:21:10.117 "rdma_max_cq_size": 0, 00:21:10.117 "rdma_cm_event_timeout_ms": 0, 00:21:10.117 "dhchap_digests": [ 00:21:10.117 "sha256", 00:21:10.117 "sha384", 00:21:10.117 "sha512" 00:21:10.117 ], 00:21:10.117 "dhchap_dhgroups": [ 00:21:10.117 "null", 00:21:10.117 "ffdhe2048", 00:21:10.117 "ffdhe3072", 00:21:10.117 "ffdhe4096", 00:21:10.117 "ffdhe6144", 00:21:10.117 "ffdhe8192" 00:21:10.117 ] 00:21:10.117 } 00:21:10.117 }, 00:21:10.117 { 00:21:10.117 "method": "bdev_nvme_attach_controller", 00:21:10.117 "params": { 00:21:10.117 "name": "TLSTEST", 00:21:10.117 "trtype": "TCP", 00:21:10.117 "adrfam": "IPv4", 00:21:10.117 "traddr": "10.0.0.2", 00:21:10.117 "trsvcid": "4420", 00:21:10.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.117 "prchk_reftag": false, 00:21:10.117 "prchk_guard": false, 00:21:10.117 "ctrlr_loss_timeout_sec": 0, 00:21:10.117 "reconnect_delay_sec": 0, 00:21:10.117 "fast_io_fail_timeout_sec": 0, 00:21:10.117 "psk": "key0", 00:21:10.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:10.117 "hdgst": false, 00:21:10.117 "ddgst": false, 00:21:10.117 "multipath": "multipath" 00:21:10.117 } 00:21:10.117 }, 00:21:10.117 { 00:21:10.117 "method": 
"bdev_nvme_set_hotplug", 00:21:10.117 "params": { 00:21:10.117 "period_us": 100000, 00:21:10.117 "enable": false 00:21:10.117 } 00:21:10.117 }, 00:21:10.117 { 00:21:10.117 "method": "bdev_wait_for_examine" 00:21:10.117 } 00:21:10.117 ] 00:21:10.117 }, 00:21:10.117 { 00:21:10.117 "subsystem": "nbd", 00:21:10.117 "config": [] 00:21:10.117 } 00:21:10.117 ] 00:21:10.117 }' 00:21:10.117 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1395091 00:21:10.117 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1395091 ']' 00:21:10.117 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1395091 00:21:10.117 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:10.117 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.117 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1395091 00:21:10.117 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:10.117 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:10.117 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1395091' 00:21:10.117 killing process with pid 1395091 00:21:10.117 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1395091 00:21:10.117 Received shutdown signal, test time was about 10.000000 seconds 00:21:10.117 00:21:10.117 Latency(us) 00:21:10.117 [2024-11-20T08:54:41.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.117 [2024-11-20T08:54:41.033Z] =================================================================================================================== 00:21:10.117 [2024-11-20T08:54:41.033Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:10.117 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1395091 00:21:10.379 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1394595 00:21:10.379 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1394595 ']' 00:21:10.379 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1394595 00:21:10.379 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:10.379 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.379 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1394595 00:21:10.379 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:10.379 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:10.379 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1394595' 00:21:10.379 killing process with pid 1394595 00:21:10.379 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1394595 00:21:10.379 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1394595 00:21:10.379 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:10.379 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:10.379 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.379 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.379 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:21:10.379 "subsystems": [ 00:21:10.379 { 00:21:10.379 "subsystem": "keyring", 00:21:10.379 "config": [ 00:21:10.379 { 00:21:10.379 "method": "keyring_file_add_key", 00:21:10.379 "params": { 00:21:10.379 "name": "key0", 00:21:10.379 "path": "/tmp/tmp.mSgh9swVDZ" 00:21:10.379 } 00:21:10.379 } 00:21:10.379 ] 00:21:10.379 }, 00:21:10.379 { 00:21:10.379 "subsystem": "iobuf", 00:21:10.379 "config": [ 00:21:10.379 { 00:21:10.379 "method": "iobuf_set_options", 00:21:10.379 "params": { 00:21:10.379 "small_pool_count": 8192, 00:21:10.379 "large_pool_count": 1024, 00:21:10.379 "small_bufsize": 8192, 00:21:10.379 "large_bufsize": 135168, 00:21:10.379 "enable_numa": false 00:21:10.379 } 00:21:10.379 } 00:21:10.379 ] 00:21:10.379 }, 00:21:10.379 { 00:21:10.379 "subsystem": "sock", 00:21:10.379 "config": [ 00:21:10.379 { 00:21:10.379 "method": "sock_set_default_impl", 00:21:10.379 "params": { 00:21:10.379 "impl_name": "posix" 00:21:10.379 } 00:21:10.379 }, 00:21:10.379 { 00:21:10.380 "method": "sock_impl_set_options", 00:21:10.380 "params": { 00:21:10.380 "impl_name": "ssl", 00:21:10.380 "recv_buf_size": 4096, 00:21:10.380 "send_buf_size": 4096, 00:21:10.380 "enable_recv_pipe": true, 00:21:10.380 "enable_quickack": false, 00:21:10.380 "enable_placement_id": 0, 00:21:10.380 "enable_zerocopy_send_server": true, 00:21:10.380 "enable_zerocopy_send_client": false, 00:21:10.380 "zerocopy_threshold": 0, 00:21:10.380 "tls_version": 0, 00:21:10.380 "enable_ktls": false 00:21:10.380 } 00:21:10.380 }, 00:21:10.380 { 00:21:10.380 "method": "sock_impl_set_options", 00:21:10.380 "params": { 00:21:10.380 "impl_name": "posix", 00:21:10.380 "recv_buf_size": 2097152, 00:21:10.380 "send_buf_size": 2097152, 00:21:10.380 "enable_recv_pipe": true, 00:21:10.380 "enable_quickack": false, 00:21:10.380 "enable_placement_id": 0, 00:21:10.380 "enable_zerocopy_send_server": true, 00:21:10.380 "enable_zerocopy_send_client": false, 00:21:10.380 "zerocopy_threshold": 0, 00:21:10.380 "tls_version": 0, 00:21:10.380 "enable_ktls": false 00:21:10.380 } 00:21:10.380 } 00:21:10.380 ] 00:21:10.380 }, 00:21:10.380 { 00:21:10.380 "subsystem": "vmd", 00:21:10.380 "config": [] 00:21:10.380 }, 00:21:10.380 { 00:21:10.380 "subsystem": "accel", 00:21:10.380 "config": [ 00:21:10.380 { 00:21:10.380 "method": "accel_set_options", 00:21:10.380 "params": { 00:21:10.380 "small_cache_size": 128, 00:21:10.380 "large_cache_size": 16, 00:21:10.380 "task_count": 2048, 00:21:10.380 "sequence_count": 2048, 00:21:10.380 "buf_count": 2048 00:21:10.380 } 00:21:10.380 } 00:21:10.380 ] 00:21:10.380 }, 00:21:10.380 { 00:21:10.380 "subsystem": "bdev", 00:21:10.380 "config": [ 00:21:10.380 { 00:21:10.380 "method": "bdev_set_options", 00:21:10.380 "params": { 00:21:10.380 "bdev_io_pool_size": 65535, 00:21:10.380 "bdev_io_cache_size": 256, 00:21:10.380 "bdev_auto_examine": true, 00:21:10.380 "iobuf_small_cache_size": 128, 00:21:10.380 "iobuf_large_cache_size": 16 00:21:10.380 } 00:21:10.380 }, 00:21:10.380 { 00:21:10.380 "method": "bdev_raid_set_options", 00:21:10.380 "params": { 00:21:10.380 
"process_window_size_kb": 1024, 00:21:10.380 "process_max_bandwidth_mb_sec": 0 00:21:10.380 } 00:21:10.380 }, 00:21:10.380 { 00:21:10.380 "method": "bdev_iscsi_set_options", 00:21:10.380 "params": { 00:21:10.380 "timeout_sec": 30 00:21:10.380 } 00:21:10.380 }, 00:21:10.380 { 00:21:10.380 "method": "bdev_nvme_set_options", 00:21:10.380 "params": { 00:21:10.380 "action_on_timeout": "none", 00:21:10.380 "timeout_us": 0, 00:21:10.380 "timeout_admin_us": 0, 00:21:10.380 "keep_alive_timeout_ms": 10000, 00:21:10.380 "arbitration_burst": 0, 00:21:10.380 "low_priority_weight": 0, 00:21:10.380 "medium_priority_weight": 0, 00:21:10.380 "high_priority_weight": 0, 00:21:10.380 "nvme_adminq_poll_period_us": 10000, 00:21:10.380 "nvme_ioq_poll_period_us": 0, 00:21:10.380 "io_queue_requests": 0, 00:21:10.380 "delay_cmd_submit": true, 00:21:10.380 "transport_retry_count": 4, 00:21:10.380 "bdev_retry_count": 3, 00:21:10.380 "transport_ack_timeout": 0, 00:21:10.380 "ctrlr_loss_timeout_sec": 0, 00:21:10.380 "reconnect_delay_sec": 0, 00:21:10.380 "fast_io_fail_timeout_sec": 0, 00:21:10.380 "disable_auto_failback": false, 00:21:10.380 "generate_uuids": false, 00:21:10.380 "transport_tos": 0, 00:21:10.380 "nvme_error_stat": false, 00:21:10.380 "rdma_srq_size": 0, 00:21:10.380 "io_path_stat": false, 00:21:10.380 "allow_accel_sequence": false, 00:21:10.380 "rdma_max_cq_size": 0, 00:21:10.380 "rdma_cm_event_timeout_ms": 0, 00:21:10.380 "dhchap_digests": [ 00:21:10.380 "sha256", 00:21:10.380 "sha384", 00:21:10.380 "sha512" 00:21:10.380 ], 00:21:10.380 "dhchap_dhgroups": [ 00:21:10.380 "null", 00:21:10.380 "ffdhe2048", 00:21:10.380 "ffdhe3072", 00:21:10.380 "ffdhe4096", 00:21:10.380 "ffdhe6144", 00:21:10.380 "ffdhe8192" 00:21:10.380 ] 00:21:10.380 } 00:21:10.380 }, 00:21:10.380 { 00:21:10.380 "method": "bdev_nvme_set_hotplug", 00:21:10.380 "params": { 00:21:10.380 "period_us": 100000, 00:21:10.380 "enable": false 00:21:10.380 } 00:21:10.380 }, 00:21:10.380 { 00:21:10.380 "method": "bdev_malloc_create", 00:21:10.380 "params": { 00:21:10.380 "name": "malloc0", 00:21:10.380 "num_blocks": 8192, 00:21:10.380 "block_size": 4096, 00:21:10.380 "physical_block_size": 4096, 00:21:10.380 "uuid": "f08c766e-95dc-440b-935d-ee37a6239502", 00:21:10.380 "optimal_io_boundary": 0, 00:21:10.380 "md_size": 0, 00:21:10.380 "dif_type": 0, 00:21:10.380 "dif_is_head_of_md": false, 00:21:10.380 "dif_pi_format": 0 00:21:10.380 } 00:21:10.380 }, 00:21:10.380 { 00:21:10.380 "method": "bdev_wait_for_examine" 00:21:10.380 } 00:21:10.380 ] 00:21:10.380 }, 00:21:10.380 { 00:21:10.380 "subsystem": "nbd", 00:21:10.380 "config": [] 00:21:10.380 }, 00:21:10.380 { 00:21:10.380 "subsystem": "scheduler", 00:21:10.380 "config": [ 00:21:10.380 { 00:21:10.380 "method": "framework_set_scheduler", 00:21:10.380 "params": { 00:21:10.380 "name": "static" 00:21:10.380 } 00:21:10.380 } 00:21:10.380 ] 00:21:10.380 }, 00:21:10.380 { 00:21:10.380 "subsystem": "nvmf", 00:21:10.380 "config": [ 00:21:10.380 { 00:21:10.380 "method": "nvmf_set_config", 00:21:10.380 "params": { 00:21:10.380 "discovery_filter": "match_any", 00:21:10.380 "admin_cmd_passthru": { 00:21:10.380 "identify_ctrlr": false 00:21:10.380 }, 00:21:10.380 "dhchap_digests": [ 00:21:10.380 "sha256", 00:21:10.380 "sha384", 00:21:10.380 "sha512" 00:21:10.380 ], 00:21:10.380 "dhchap_dhgroups": [ 00:21:10.380 "null", 00:21:10.380 "ffdhe2048", 00:21:10.380 "ffdhe3072", 00:21:10.380 "ffdhe4096", 00:21:10.380 "ffdhe6144", 00:21:10.380 "ffdhe8192" 00:21:10.380 ] 00:21:10.380 } 00:21:10.380 }, 00:21:10.380 { 
00:21:10.380 "method": "nvmf_set_max_subsystems", 00:21:10.380 "params": { 00:21:10.380 "max_subsystems": 1024 00:21:10.380 } 00:21:10.380 }, 00:21:10.380 { 00:21:10.380 "method": "nvmf_set_crdt", 00:21:10.380 "params": { 00:21:10.380 "crdt1": 0, 00:21:10.380 "crdt2": 0, 00:21:10.380 "crdt3": 0 00:21:10.380 } 00:21:10.380 }, 00:21:10.380 { 00:21:10.380 "method": "nvmf_create_transport", 00:21:10.380 "params": { 00:21:10.380 "trtype": "TCP", 00:21:10.380 "max_queue_depth": 128, 00:21:10.380 "max_io_qpairs_per_ctrlr": 127, 00:21:10.380 "in_capsule_data_size": 4096, 00:21:10.380 "max_io_size": 131072, 00:21:10.380 "io_unit_size": 131072, 00:21:10.380 "max_aq_depth": 128, 00:21:10.380 "num_shared_buffers": 511, 00:21:10.380 "buf_cache_size": 4294967295, 00:21:10.380 "dif_insert_or_strip": false, 00:21:10.380 "zcopy": false, 00:21:10.380 "c2h_success": false, 00:21:10.380 "sock_priority": 0, 00:21:10.380 "abort_timeout_sec": 1, 00:21:10.380 "ack_timeout": 0, 00:21:10.380 "data_wr_pool_size": 0 00:21:10.380 } 00:21:10.380 }, 00:21:10.380 { 00:21:10.380 "method": "nvmf_create_subsystem", 00:21:10.380 "params": { 00:21:10.380 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.381 "allow_any_host": false, 00:21:10.381 "serial_number": "SPDK00000000000001", 00:21:10.381 "model_number": "SPDK bdev Controller", 00:21:10.381 "max_namespaces": 10, 00:21:10.381 "min_cntlid": 1, 00:21:10.381 "max_cntlid": 65519, 00:21:10.381 "ana_reporting": false 00:21:10.381 } 00:21:10.381 }, 00:21:10.381 { 00:21:10.381 "method": "nvmf_subsystem_add_host", 00:21:10.381 "params": { 00:21:10.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.381 "host": "nqn.2016-06.io.spdk:host1", 00:21:10.381 "psk": "key0" 00:21:10.381 } 00:21:10.381 }, 00:21:10.381 { 00:21:10.381 "method": "nvmf_subsystem_add_ns", 00:21:10.381 "params": { 00:21:10.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.381 "namespace": { 00:21:10.381 "nsid": 1, 00:21:10.381 "bdev_name": "malloc0", 00:21:10.381 "nguid": "F08C766E95DC440B935DEE37A6239502", 00:21:10.381 "uuid": "f08c766e-95dc-440b-935d-ee37a6239502", 00:21:10.381 "no_auto_visible": false 00:21:10.381 } 00:21:10.381 } 00:21:10.381 }, 00:21:10.381 { 00:21:10.381 "method": "nvmf_subsystem_add_listener", 00:21:10.381 "params": { 00:21:10.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.381 "listen_address": { 00:21:10.381 "trtype": "TCP", 00:21:10.381 "adrfam": "IPv4", 00:21:10.381 "traddr": "10.0.0.2", 00:21:10.381 "trsvcid": "4420" 00:21:10.381 }, 00:21:10.381 "secure_channel": true 00:21:10.381 } 00:21:10.381 } 00:21:10.381 ] 00:21:10.381 } 00:21:10.381 ] 00:21:10.381 }' 00:21:10.381 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1395641 00:21:10.381 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1395641 00:21:10.381 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:10.381 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1395641 ']' 00:21:10.381 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.381 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.381 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:21:10.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.381 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.381 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.641 [2024-11-20 09:54:41.296336] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:21:10.641 [2024-11-20 09:54:41.296395] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.641 [2024-11-20 09:54:41.386536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.641 [2024-11-20 09:54:41.416537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.641 [2024-11-20 09:54:41.416565] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.641 [2024-11-20 09:54:41.416570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.641 [2024-11-20 09:54:41.416575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.641 [2024-11-20 09:54:41.416579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.641 [2024-11-20 09:54:41.417061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.902 [2024-11-20 09:54:41.609587] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.902 [2024-11-20 09:54:41.641611] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:10.902 [2024-11-20 09:54:41.641818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.162 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.162 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:11.162 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:11.162 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:11.162 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.423 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.423 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1395667 00:21:11.423 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1395667 /var/tmp/bdevperf.sock 00:21:11.423 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1395667 ']' 00:21:11.423 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.423 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.423 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
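[editor's annotation] The nvmfappstart call at target/tls.sh@205 above hands nvmf_tgt its entire JSON configuration through -c /dev/fd/62; in the script this is a bash process substitution wrapped around the echo whose payload is printed in the trace. A minimal sketch of the pattern, assuming the JSON shown above is held in a $config variable (the key name key0, the PSK path /tmp/tmp.mSgh9swVDZ, and the -m 0x2 core mask are taken from the log; the rest is illustrative, not the script's exact code):

    # The TLS-relevant pieces of the config are the keyring_file_add_key entry,
    # which registers the PSK file as "key0", and the nvmf_subsystem_add_listener
    # entry with "secure_channel": true on 10.0.0.2:4420.
    config='{ "subsystems": [ ... ] }'   # elided; the full JSON is in the trace above
    # A process substitution turns the echo into a readable /dev/fd/NN path,
    # which is why the xtrace shows -c /dev/fd/62 rather than a file name.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -m 0x2 -c <(echo "$config") &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # autotest_common.sh helper, as seen in the trace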
00:21:11.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.424 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:11.424 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.424 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.424 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:21:11.424 "subsystems": [ 00:21:11.424 { 00:21:11.424 "subsystem": "keyring", 00:21:11.424 "config": [ 00:21:11.424 { 00:21:11.424 "method": "keyring_file_add_key", 00:21:11.424 "params": { 00:21:11.424 "name": "key0", 00:21:11.424 "path": "/tmp/tmp.mSgh9swVDZ" 00:21:11.424 } 00:21:11.424 } 00:21:11.424 ] 00:21:11.424 }, 00:21:11.424 { 00:21:11.424 "subsystem": "iobuf", 00:21:11.424 "config": [ 00:21:11.424 { 00:21:11.424 "method": "iobuf_set_options", 00:21:11.424 "params": { 00:21:11.424 "small_pool_count": 8192, 00:21:11.424 "large_pool_count": 1024, 00:21:11.424 "small_bufsize": 8192, 00:21:11.424 "large_bufsize": 135168, 00:21:11.424 "enable_numa": false 00:21:11.424 } 00:21:11.424 } 00:21:11.424 ] 00:21:11.424 }, 00:21:11.424 { 00:21:11.424 "subsystem": "sock", 00:21:11.424 "config": [ 00:21:11.424 { 00:21:11.424 "method": "sock_set_default_impl", 00:21:11.424 "params": { 00:21:11.424 "impl_name": "posix" 00:21:11.424 } 00:21:11.424 }, 00:21:11.424 { 00:21:11.424 "method": "sock_impl_set_options", 00:21:11.424 "params": { 00:21:11.424 "impl_name": "ssl", 00:21:11.424 "recv_buf_size": 4096, 00:21:11.424 "send_buf_size": 4096, 00:21:11.424 "enable_recv_pipe": true, 00:21:11.424 "enable_quickack": false, 00:21:11.424 "enable_placement_id": 0, 00:21:11.424 "enable_zerocopy_send_server": true, 00:21:11.424 "enable_zerocopy_send_client": false, 00:21:11.424 "zerocopy_threshold": 0, 00:21:11.424 "tls_version": 0, 00:21:11.424 "enable_ktls": false 00:21:11.424 } 00:21:11.424 }, 00:21:11.424 { 00:21:11.424 "method": "sock_impl_set_options", 00:21:11.424 "params": { 00:21:11.424 "impl_name": "posix", 00:21:11.424 "recv_buf_size": 2097152, 00:21:11.424 "send_buf_size": 2097152, 00:21:11.424 "enable_recv_pipe": true, 00:21:11.424 "enable_quickack": false, 00:21:11.424 "enable_placement_id": 0, 00:21:11.424 "enable_zerocopy_send_server": true, 00:21:11.424 "enable_zerocopy_send_client": false, 00:21:11.424 "zerocopy_threshold": 0, 00:21:11.424 "tls_version": 0, 00:21:11.424 "enable_ktls": false 00:21:11.424 } 00:21:11.424 } 00:21:11.424 ] 00:21:11.424 }, 00:21:11.424 { 00:21:11.424 "subsystem": "vmd", 00:21:11.424 "config": [] 00:21:11.424 }, 00:21:11.424 { 00:21:11.424 "subsystem": "accel", 00:21:11.424 "config": [ 00:21:11.424 { 00:21:11.424 "method": "accel_set_options", 00:21:11.424 "params": { 00:21:11.424 "small_cache_size": 128, 00:21:11.424 "large_cache_size": 16, 00:21:11.424 "task_count": 2048, 00:21:11.424 "sequence_count": 2048, 00:21:11.424 "buf_count": 2048 00:21:11.424 } 00:21:11.424 } 00:21:11.424 ] 00:21:11.424 }, 00:21:11.424 { 00:21:11.424 "subsystem": "bdev", 00:21:11.424 "config": [ 00:21:11.424 { 00:21:11.424 "method": "bdev_set_options", 00:21:11.424 "params": { 00:21:11.424 "bdev_io_pool_size": 65535, 00:21:11.424 "bdev_io_cache_size": 256, 00:21:11.424 "bdev_auto_examine": true, 00:21:11.424 "iobuf_small_cache_size": 128, 
00:21:11.424 "iobuf_large_cache_size": 16 00:21:11.424 } 00:21:11.424 }, 00:21:11.424 { 00:21:11.424 "method": "bdev_raid_set_options", 00:21:11.424 "params": { 00:21:11.424 "process_window_size_kb": 1024, 00:21:11.424 "process_max_bandwidth_mb_sec": 0 00:21:11.424 } 00:21:11.424 }, 00:21:11.424 { 00:21:11.424 "method": "bdev_iscsi_set_options", 00:21:11.424 "params": { 00:21:11.424 "timeout_sec": 30 00:21:11.424 } 00:21:11.424 }, 00:21:11.424 { 00:21:11.424 "method": "bdev_nvme_set_options", 00:21:11.424 "params": { 00:21:11.424 "action_on_timeout": "none", 00:21:11.424 "timeout_us": 0, 00:21:11.424 "timeout_admin_us": 0, 00:21:11.424 "keep_alive_timeout_ms": 10000, 00:21:11.424 "arbitration_burst": 0, 00:21:11.424 "low_priority_weight": 0, 00:21:11.424 "medium_priority_weight": 0, 00:21:11.424 "high_priority_weight": 0, 00:21:11.424 "nvme_adminq_poll_period_us": 10000, 00:21:11.424 "nvme_ioq_poll_period_us": 0, 00:21:11.424 "io_queue_requests": 512, 00:21:11.424 "delay_cmd_submit": true, 00:21:11.424 "transport_retry_count": 4, 00:21:11.424 "bdev_retry_count": 3, 00:21:11.424 "transport_ack_timeout": 0, 00:21:11.424 "ctrlr_loss_timeout_sec": 0, 00:21:11.424 "reconnect_delay_sec": 0, 00:21:11.424 "fast_io_fail_timeout_sec": 0, 00:21:11.424 "disable_auto_failback": false, 00:21:11.424 "generate_uuids": false, 00:21:11.424 "transport_tos": 0, 00:21:11.424 "nvme_error_stat": false, 00:21:11.424 "rdma_srq_size": 0, 00:21:11.424 "io_path_stat": false, 00:21:11.424 "allow_accel_sequence": false, 00:21:11.424 "rdma_max_cq_size": 0, 00:21:11.424 "rdma_cm_event_timeout_ms": 0, 00:21:11.424 "dhchap_digests": [ 00:21:11.424 "sha256", 00:21:11.424 "sha384", 00:21:11.424 "sha512" 00:21:11.424 ], 00:21:11.424 "dhchap_dhgroups": [ 00:21:11.424 "null", 00:21:11.424 "ffdhe2048", 00:21:11.424 "ffdhe3072", 00:21:11.424 "ffdhe4096", 00:21:11.424 "ffdhe6144", 00:21:11.424 "ffdhe8192" 00:21:11.424 ] 00:21:11.424 } 00:21:11.424 }, 00:21:11.424 { 00:21:11.424 "method": "bdev_nvme_attach_controller", 00:21:11.424 "params": { 00:21:11.424 "name": "TLSTEST", 00:21:11.424 "trtype": "TCP", 00:21:11.424 "adrfam": "IPv4", 00:21:11.424 "traddr": "10.0.0.2", 00:21:11.424 "trsvcid": "4420", 00:21:11.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:11.424 "prchk_reftag": false, 00:21:11.424 "prchk_guard": false, 00:21:11.424 "ctrlr_loss_timeout_sec": 0, 00:21:11.424 "reconnect_delay_sec": 0, 00:21:11.424 "fast_io_fail_timeout_sec": 0, 00:21:11.424 "psk": "key0", 00:21:11.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:11.424 "hdgst": false, 00:21:11.424 "ddgst": false, 00:21:11.424 "multipath": "multipath" 00:21:11.424 } 00:21:11.424 }, 00:21:11.424 { 00:21:11.424 "method": "bdev_nvme_set_hotplug", 00:21:11.424 "params": { 00:21:11.424 "period_us": 100000, 00:21:11.424 "enable": false 00:21:11.424 } 00:21:11.424 }, 00:21:11.424 { 00:21:11.424 "method": "bdev_wait_for_examine" 00:21:11.424 } 00:21:11.424 ] 00:21:11.424 }, 00:21:11.424 { 00:21:11.424 "subsystem": "nbd", 00:21:11.424 "config": [] 00:21:11.424 } 00:21:11.424 ] 00:21:11.424 }' 00:21:11.424 [2024-11-20 09:54:42.169082] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:21:11.424 [2024-11-20 09:54:42.169139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1395667 ] 00:21:11.424 [2024-11-20 09:54:42.259248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.424 [2024-11-20 09:54:42.294608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.686 [2024-11-20 09:54:42.433980] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:12.257 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.257 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:12.257 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:12.257 Running I/O for 10 seconds... 00:21:14.141 5897.00 IOPS, 23.04 MiB/s [2024-11-20T08:54:46.442Z] 4925.00 IOPS, 19.24 MiB/s [2024-11-20T08:54:47.384Z] 5002.67 IOPS, 19.54 MiB/s [2024-11-20T08:54:48.326Z] 5262.00 IOPS, 20.55 MiB/s [2024-11-20T08:54:49.266Z] 5435.40 IOPS, 21.23 MiB/s [2024-11-20T08:54:50.208Z] 5461.67 IOPS, 21.33 MiB/s [2024-11-20T08:54:51.149Z] 5524.14 IOPS, 21.58 MiB/s [2024-11-20T08:54:52.091Z] 5610.62 IOPS, 21.92 MiB/s [2024-11-20T08:54:53.474Z] 5697.56 IOPS, 22.26 MiB/s [2024-11-20T08:54:53.474Z] 5627.10 IOPS, 21.98 MiB/s 00:21:22.558 Latency(us) 00:21:22.558 [2024-11-20T08:54:53.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.558 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:22.558 Verification LBA range: start 0x0 length 0x2000 00:21:22.558 TLSTESTn1 : 10.01 5633.36 22.01 0.00 0.00 22682.97 4341.76 27088.21 00:21:22.558 [2024-11-20T08:54:53.474Z] =================================================================================================================== 00:21:22.558 [2024-11-20T08:54:53.474Z] Total : 5633.36 22.01 0.00 0.00 22682.97 4341.76 27088.21 00:21:22.558 { 00:21:22.558 "results": [ 00:21:22.558 { 00:21:22.558 "job": "TLSTESTn1", 00:21:22.558 "core_mask": "0x4", 00:21:22.558 "workload": "verify", 00:21:22.558 "status": "finished", 00:21:22.558 "verify_range": { 00:21:22.558 "start": 0, 00:21:22.558 "length": 8192 00:21:22.558 }, 00:21:22.559 "queue_depth": 128, 00:21:22.559 "io_size": 4096, 00:21:22.559 "runtime": 10.011078, 00:21:22.559 "iops": 5633.359364496011, 00:21:22.559 "mibps": 22.005310017562543, 00:21:22.559 "io_failed": 0, 00:21:22.559 "io_timeout": 0, 00:21:22.559 "avg_latency_us": 22682.974871976734, 00:21:22.559 "min_latency_us": 4341.76, 00:21:22.559 "max_latency_us": 27088.213333333333 00:21:22.559 } 00:21:22.559 ], 00:21:22.559 "core_count": 1 00:21:22.559 } 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1395667 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1395667 ']' 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1395667 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1395667 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1395667' 00:21:22.559 killing process with pid 1395667 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1395667 00:21:22.559 Received shutdown signal, test time was about 10.000000 seconds 00:21:22.559 00:21:22.559 Latency(us) 00:21:22.559 [2024-11-20T08:54:53.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.559 [2024-11-20T08:54:53.475Z] =================================================================================================================== 00:21:22.559 [2024-11-20T08:54:53.475Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1395667 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1395641 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1395641 ']' 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1395641 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1395641 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1395641' 00:21:22.559 killing process with pid 1395641 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1395641 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1395641 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1398009 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1398009 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
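[editor's annotation] The kill sequences that recur above (the '[ -z pid ]' guard, kill -0, uname, ps --no-headers -o comm=, then kill and wait) all come from the killprocess helper in autotest_common.sh. A hedged reconstruction of its shape from the trace lines alone; the real helper has more branches, and the sudo case here is an assumption inferred from the "reactor_N = sudo" comparison in the trace:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1              # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" 2>/dev/null || return 0  # nothing to do if it already exited
        local name=
        [ "$(uname)" = Linux ] && name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        if [ "$name" = sudo ]; then
            sudo kill "$pid"   # assumption: escalate rather than kill the sudo wrapper
        else
            kill "$pid"
        fi
        wait "$pid" || true    # reap; the real helper propagates the exit status
    }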
00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1398009 ']' 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.559 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.820 [2024-11-20 09:54:53.511793] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:21:22.820 [2024-11-20 09:54:53.511853] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.820 [2024-11-20 09:54:53.606744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.820 [2024-11-20 09:54:53.657910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.820 [2024-11-20 09:54:53.657965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.820 [2024-11-20 09:54:53.657974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.820 [2024-11-20 09:54:53.657989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.820 [2024-11-20 09:54:53.657996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
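[editor's annotation] The target producing the startup notices around this point is launched inside the cvl_0_0_ns_spdk network namespace (nvmf/common.sh@508 above), so the 10.0.0.2 listener lives on the test's isolated interfaces. The shape of that launch, with the namespace name and flags taken verbatim from the trace; -i selects the shared-memory id and -e 0xFFFF enables all tracepoint groups, which is what produces the "Tracepoint Group Mask 0xFFFF specified" notices:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # waitforlisten polls until the app answers on its UNIX-domain RPC socket.
    waitforlisten "$nvmfpid" /var/tmp/spdk.sock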
00:21:22.820 [2024-11-20 09:54:53.658773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.763 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.763 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:23.763 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:23.763 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:23.763 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.763 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.763 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.mSgh9swVDZ 00:21:23.763 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.mSgh9swVDZ 00:21:23.763 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:23.763 [2024-11-20 09:54:54.528400] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.763 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:24.024 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:24.024 [2024-11-20 09:54:54.877284] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:24.024 [2024-11-20 09:54:54.877623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.024 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:24.283 malloc0 00:21:24.283 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:24.544 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.mSgh9swVDZ 00:21:24.544 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:24.804 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:24.804 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1398382 00:21:24.804 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:24.804 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1398382 /var/tmp/bdevperf.sock 00:21:24.804 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1398382 ']' 00:21:24.804 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:24.804 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.804 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:24.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:24.804 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.804 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.804 [2024-11-20 09:54:55.636039] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:21:24.804 [2024-11-20 09:54:55.636106] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398382 ] 00:21:25.063 [2024-11-20 09:54:55.722908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.063 [2024-11-20 09:54:55.757496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.063 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.063 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:25.063 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mSgh9swVDZ 00:21:25.326 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:25.326 [2024-11-20 09:54:56.145755] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:25.326 nvme0n1 00:21:25.584 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:25.584 Running I/O for 1 seconds... 
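[editor's annotation] Everything the bdevperf side needs for the TLS connect is visible in the target/tls.sh@259-264 trace above: register the same PSK file with the bdevperf instance, attach a controller with --psk, then drive I/O. Condensed into plain rpc.py calls (all addresses, NQNs, flags, and the key path are copied from the log; the results of this run follow below):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mSgh9swVDZ
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # The attach logs "TLS support is considered experimental" and exposes nvme0n1,
    # which perform_tests then exercises for the configured -t 1 second.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests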
00:21:26.522 5310.00 IOPS, 20.74 MiB/s 00:21:26.522 Latency(us) 00:21:26.522 [2024-11-20T08:54:57.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.522 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:26.522 Verification LBA range: start 0x0 length 0x2000 00:21:26.522 nvme0n1 : 1.01 5364.22 20.95 0.00 0.00 23690.84 4997.12 23156.05 00:21:26.522 [2024-11-20T08:54:57.438Z] =================================================================================================================== 00:21:26.522 [2024-11-20T08:54:57.438Z] Total : 5364.22 20.95 0.00 0.00 23690.84 4997.12 23156.05 00:21:26.522 { 00:21:26.522 "results": [ 00:21:26.522 { 00:21:26.522 "job": "nvme0n1", 00:21:26.522 "core_mask": "0x2", 00:21:26.522 "workload": "verify", 00:21:26.522 "status": "finished", 00:21:26.522 "verify_range": { 00:21:26.522 "start": 0, 00:21:26.522 "length": 8192 00:21:26.522 }, 00:21:26.522 "queue_depth": 128, 00:21:26.522 "io_size": 4096, 00:21:26.522 "runtime": 1.013755, 00:21:26.522 "iops": 5364.215219653664, 00:21:26.522 "mibps": 20.953965701772123, 00:21:26.522 "io_failed": 0, 00:21:26.522 "io_timeout": 0, 00:21:26.522 "avg_latency_us": 23690.840691430672, 00:21:26.522 "min_latency_us": 4997.12, 00:21:26.522 "max_latency_us": 23156.053333333333 00:21:26.522 } 00:21:26.522 ], 00:21:26.522 "core_count": 1 00:21:26.522 } 00:21:26.522 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1398382 00:21:26.522 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1398382 ']' 00:21:26.522 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1398382 00:21:26.522 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:26.522 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:26.522 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1398382 00:21:26.783 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:26.783 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:26.783 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1398382' 00:21:26.783 killing process with pid 1398382 00:21:26.783 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1398382 00:21:26.783 Received shutdown signal, test time was about 1.000000 seconds 00:21:26.783 00:21:26.783 Latency(us) 00:21:26.783 [2024-11-20T08:54:57.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.783 [2024-11-20T08:54:57.699Z] =================================================================================================================== 00:21:26.783 [2024-11-20T08:54:57.699Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:26.783 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1398382 00:21:26.783 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1398009 00:21:26.783 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1398009 ']' 00:21:26.783 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1398009 00:21:26.783 09:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:26.783 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:26.783 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1398009 00:21:26.783 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:26.783 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:26.783 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1398009' 00:21:26.783 killing process with pid 1398009 00:21:26.783 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1398009 00:21:26.783 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1398009 00:21:27.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:27.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:27.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:27.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1398736 00:21:27.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1398736 00:21:27.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:27.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1398736 ']' 00:21:27.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.044 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.044 [2024-11-20 09:54:57.797818] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:21:27.044 [2024-11-20 09:54:57.797871] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.044 [2024-11-20 09:54:57.891079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.044 [2024-11-20 09:54:57.925028] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.044 [2024-11-20 09:54:57.925064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:27.044 [2024-11-20 09:54:57.925073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.044 [2024-11-20 09:54:57.925083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.044 [2024-11-20 09:54:57.925089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:27.044 [2024-11-20 09:54:57.925691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.987 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.987 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:27.987 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:27.987 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:27.987 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.987 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.987 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:27.987 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.987 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.987 [2024-11-20 09:54:58.663342] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.987 malloc0 00:21:27.987 [2024-11-20 09:54:58.693443] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:27.987 [2024-11-20 09:54:58.693790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.987 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.987 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1399081 00:21:27.987 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1399081 /var/tmp/bdevperf.sock 00:21:27.987 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:27.987 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1399081 ']' 00:21:27.987 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.987 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.987 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:27.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:27.987 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.987 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.987 [2024-11-20 09:54:58.775573] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:21:27.987 [2024-11-20 09:54:58.775636] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1399081 ] 00:21:27.987 [2024-11-20 09:54:58.861658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.987 [2024-11-20 09:54:58.895677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.926 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.926 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:28.926 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mSgh9swVDZ 00:21:28.926 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:29.185 [2024-11-20 09:54:59.905471] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:29.185 nvme0n1 00:21:29.185 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:29.185 Running I/O for 1 seconds... 00:21:30.570 5824.00 IOPS, 22.75 MiB/s 00:21:30.570 Latency(us) 00:21:30.570 [2024-11-20T08:55:01.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.570 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:30.570 Verification LBA range: start 0x0 length 0x2000 00:21:30.570 nvme0n1 : 1.01 5877.47 22.96 0.00 0.00 21635.02 5133.65 29709.65 00:21:30.570 [2024-11-20T08:55:01.486Z] =================================================================================================================== 00:21:30.570 [2024-11-20T08:55:01.486Z] Total : 5877.47 22.96 0.00 0.00 21635.02 5133.65 29709.65 00:21:30.570 { 00:21:30.570 "results": [ 00:21:30.570 { 00:21:30.570 "job": "nvme0n1", 00:21:30.570 "core_mask": "0x2", 00:21:30.570 "workload": "verify", 00:21:30.570 "status": "finished", 00:21:30.570 "verify_range": { 00:21:30.570 "start": 0, 00:21:30.570 "length": 8192 00:21:30.570 }, 00:21:30.570 "queue_depth": 128, 00:21:30.570 "io_size": 4096, 00:21:30.570 "runtime": 1.01285, 00:21:30.570 "iops": 5877.474453275411, 00:21:30.570 "mibps": 22.958884583107075, 00:21:30.570 "io_failed": 0, 00:21:30.570 "io_timeout": 0, 00:21:30.570 "avg_latency_us": 21635.017414188926, 00:21:30.570 "min_latency_us": 5133.653333333334, 00:21:30.570 "max_latency_us": 29709.653333333332 00:21:30.570 } 00:21:30.570 ], 00:21:30.570 "core_count": 1 00:21:30.570 } 00:21:30.570 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:30.570 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.570 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.570 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.570 09:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:30.570 "subsystems": [ 00:21:30.570 { 00:21:30.570 "subsystem": "keyring", 00:21:30.570 "config": [ 00:21:30.570 { 00:21:30.570 "method": "keyring_file_add_key", 00:21:30.570 "params": { 00:21:30.570 "name": "key0", 00:21:30.570 "path": "/tmp/tmp.mSgh9swVDZ" 00:21:30.570 } 00:21:30.570 } 00:21:30.570 ] 00:21:30.570 }, 00:21:30.570 { 00:21:30.570 "subsystem": "iobuf", 00:21:30.570 "config": [ 00:21:30.570 { 00:21:30.570 "method": "iobuf_set_options", 00:21:30.570 "params": { 00:21:30.570 "small_pool_count": 8192, 00:21:30.570 "large_pool_count": 1024, 00:21:30.570 "small_bufsize": 8192, 00:21:30.570 "large_bufsize": 135168, 00:21:30.570 "enable_numa": false 00:21:30.570 } 00:21:30.570 } 00:21:30.570 ] 00:21:30.570 }, 00:21:30.570 { 00:21:30.570 "subsystem": "sock", 00:21:30.570 "config": [ 00:21:30.571 { 00:21:30.571 "method": "sock_set_default_impl", 00:21:30.571 "params": { 00:21:30.571 "impl_name": "posix" 00:21:30.571 } 00:21:30.571 }, 00:21:30.571 { 00:21:30.571 "method": "sock_impl_set_options", 00:21:30.571 "params": { 00:21:30.571 "impl_name": "ssl", 00:21:30.571 "recv_buf_size": 4096, 00:21:30.571 "send_buf_size": 4096, 00:21:30.571 "enable_recv_pipe": true, 00:21:30.571 "enable_quickack": false, 00:21:30.571 "enable_placement_id": 0, 00:21:30.571 "enable_zerocopy_send_server": true, 00:21:30.571 "enable_zerocopy_send_client": false, 00:21:30.571 "zerocopy_threshold": 0, 00:21:30.571 "tls_version": 0, 00:21:30.571 "enable_ktls": false 00:21:30.571 } 00:21:30.571 }, 00:21:30.571 { 00:21:30.571 "method": "sock_impl_set_options", 00:21:30.571 "params": { 00:21:30.571 "impl_name": "posix", 00:21:30.571 "recv_buf_size": 2097152, 00:21:30.571 "send_buf_size": 2097152, 00:21:30.571 "enable_recv_pipe": true, 00:21:30.571 "enable_quickack": false, 00:21:30.571 "enable_placement_id": 0, 00:21:30.571 "enable_zerocopy_send_server": true, 00:21:30.571 "enable_zerocopy_send_client": false, 00:21:30.571 "zerocopy_threshold": 0, 00:21:30.571 "tls_version": 0, 00:21:30.571 "enable_ktls": false 00:21:30.571 } 00:21:30.571 } 00:21:30.571 ] 00:21:30.571 }, 00:21:30.571 { 00:21:30.571 "subsystem": "vmd", 00:21:30.571 "config": [] 00:21:30.571 }, 00:21:30.571 { 00:21:30.571 "subsystem": "accel", 00:21:30.571 "config": [ 00:21:30.571 { 00:21:30.571 "method": "accel_set_options", 00:21:30.571 "params": { 00:21:30.571 "small_cache_size": 128, 00:21:30.571 "large_cache_size": 16, 00:21:30.571 "task_count": 2048, 00:21:30.571 "sequence_count": 2048, 00:21:30.571 "buf_count": 2048 00:21:30.571 } 00:21:30.571 } 00:21:30.571 ] 00:21:30.571 }, 00:21:30.571 { 00:21:30.571 "subsystem": "bdev", 00:21:30.571 "config": [ 00:21:30.571 { 00:21:30.571 "method": "bdev_set_options", 00:21:30.571 "params": { 00:21:30.571 "bdev_io_pool_size": 65535, 00:21:30.571 "bdev_io_cache_size": 256, 00:21:30.571 "bdev_auto_examine": true, 00:21:30.571 "iobuf_small_cache_size": 128, 00:21:30.571 "iobuf_large_cache_size": 16 00:21:30.571 } 00:21:30.571 }, 00:21:30.571 { 00:21:30.571 "method": "bdev_raid_set_options", 00:21:30.571 "params": { 00:21:30.571 "process_window_size_kb": 1024, 00:21:30.571 "process_max_bandwidth_mb_sec": 0 00:21:30.571 } 00:21:30.571 }, 00:21:30.571 { 00:21:30.571 "method": "bdev_iscsi_set_options", 00:21:30.571 "params": { 00:21:30.571 "timeout_sec": 30 00:21:30.571 } 00:21:30.571 }, 00:21:30.571 { 00:21:30.571 "method": "bdev_nvme_set_options", 00:21:30.571 "params": { 00:21:30.571 "action_on_timeout": "none", 00:21:30.571 
"timeout_us": 0, 00:21:30.571 "timeout_admin_us": 0, 00:21:30.571 "keep_alive_timeout_ms": 10000, 00:21:30.571 "arbitration_burst": 0, 00:21:30.571 "low_priority_weight": 0, 00:21:30.571 "medium_priority_weight": 0, 00:21:30.571 "high_priority_weight": 0, 00:21:30.571 "nvme_adminq_poll_period_us": 10000, 00:21:30.571 "nvme_ioq_poll_period_us": 0, 00:21:30.571 "io_queue_requests": 0, 00:21:30.571 "delay_cmd_submit": true, 00:21:30.571 "transport_retry_count": 4, 00:21:30.571 "bdev_retry_count": 3, 00:21:30.571 "transport_ack_timeout": 0, 00:21:30.571 "ctrlr_loss_timeout_sec": 0, 00:21:30.571 "reconnect_delay_sec": 0, 00:21:30.571 "fast_io_fail_timeout_sec": 0, 00:21:30.571 "disable_auto_failback": false, 00:21:30.571 "generate_uuids": false, 00:21:30.571 "transport_tos": 0, 00:21:30.571 "nvme_error_stat": false, 00:21:30.571 "rdma_srq_size": 0, 00:21:30.571 "io_path_stat": false, 00:21:30.571 "allow_accel_sequence": false, 00:21:30.571 "rdma_max_cq_size": 0, 00:21:30.571 "rdma_cm_event_timeout_ms": 0, 00:21:30.571 "dhchap_digests": [ 00:21:30.571 "sha256", 00:21:30.571 "sha384", 00:21:30.571 "sha512" 00:21:30.571 ], 00:21:30.571 "dhchap_dhgroups": [ 00:21:30.571 "null", 00:21:30.571 "ffdhe2048", 00:21:30.571 "ffdhe3072", 00:21:30.571 "ffdhe4096", 00:21:30.571 "ffdhe6144", 00:21:30.571 "ffdhe8192" 00:21:30.571 ] 00:21:30.571 } 00:21:30.571 }, 00:21:30.571 { 00:21:30.571 "method": "bdev_nvme_set_hotplug", 00:21:30.571 "params": { 00:21:30.571 "period_us": 100000, 00:21:30.571 "enable": false 00:21:30.571 } 00:21:30.571 }, 00:21:30.571 { 00:21:30.571 "method": "bdev_malloc_create", 00:21:30.571 "params": { 00:21:30.571 "name": "malloc0", 00:21:30.571 "num_blocks": 8192, 00:21:30.571 "block_size": 4096, 00:21:30.571 "physical_block_size": 4096, 00:21:30.571 "uuid": "ce0a7804-c941-4909-86a5-2e09d4b9b48c", 00:21:30.571 "optimal_io_boundary": 0, 00:21:30.571 "md_size": 0, 00:21:30.571 "dif_type": 0, 00:21:30.571 "dif_is_head_of_md": false, 00:21:30.571 "dif_pi_format": 0 00:21:30.571 } 00:21:30.571 }, 00:21:30.571 { 00:21:30.571 "method": "bdev_wait_for_examine" 00:21:30.571 } 00:21:30.571 ] 00:21:30.571 }, 00:21:30.571 { 00:21:30.571 "subsystem": "nbd", 00:21:30.571 "config": [] 00:21:30.571 }, 00:21:30.571 { 00:21:30.571 "subsystem": "scheduler", 00:21:30.571 "config": [ 00:21:30.571 { 00:21:30.571 "method": "framework_set_scheduler", 00:21:30.571 "params": { 00:21:30.571 "name": "static" 00:21:30.571 } 00:21:30.571 } 00:21:30.571 ] 00:21:30.571 }, 00:21:30.571 { 00:21:30.571 "subsystem": "nvmf", 00:21:30.571 "config": [ 00:21:30.571 { 00:21:30.571 "method": "nvmf_set_config", 00:21:30.571 "params": { 00:21:30.571 "discovery_filter": "match_any", 00:21:30.571 "admin_cmd_passthru": { 00:21:30.571 "identify_ctrlr": false 00:21:30.572 }, 00:21:30.572 "dhchap_digests": [ 00:21:30.572 "sha256", 00:21:30.572 "sha384", 00:21:30.572 "sha512" 00:21:30.572 ], 00:21:30.572 "dhchap_dhgroups": [ 00:21:30.572 "null", 00:21:30.572 "ffdhe2048", 00:21:30.572 "ffdhe3072", 00:21:30.572 "ffdhe4096", 00:21:30.572 "ffdhe6144", 00:21:30.572 "ffdhe8192" 00:21:30.572 ] 00:21:30.572 } 00:21:30.572 }, 00:21:30.572 { 00:21:30.572 "method": "nvmf_set_max_subsystems", 00:21:30.572 "params": { 00:21:30.572 "max_subsystems": 1024 00:21:30.572 } 00:21:30.572 }, 00:21:30.572 { 00:21:30.572 "method": "nvmf_set_crdt", 00:21:30.572 "params": { 00:21:30.572 "crdt1": 0, 00:21:30.572 "crdt2": 0, 00:21:30.572 "crdt3": 0 00:21:30.572 } 00:21:30.572 }, 00:21:30.572 { 00:21:30.572 "method": "nvmf_create_transport", 00:21:30.572 "params": 
{ 00:21:30.572 "trtype": "TCP", 00:21:30.572 "max_queue_depth": 128, 00:21:30.572 "max_io_qpairs_per_ctrlr": 127, 00:21:30.572 "in_capsule_data_size": 4096, 00:21:30.572 "max_io_size": 131072, 00:21:30.572 "io_unit_size": 131072, 00:21:30.572 "max_aq_depth": 128, 00:21:30.572 "num_shared_buffers": 511, 00:21:30.572 "buf_cache_size": 4294967295, 00:21:30.572 "dif_insert_or_strip": false, 00:21:30.572 "zcopy": false, 00:21:30.572 "c2h_success": false, 00:21:30.572 "sock_priority": 0, 00:21:30.572 "abort_timeout_sec": 1, 00:21:30.572 "ack_timeout": 0, 00:21:30.572 "data_wr_pool_size": 0 00:21:30.572 } 00:21:30.572 }, 00:21:30.572 { 00:21:30.572 "method": "nvmf_create_subsystem", 00:21:30.572 "params": { 00:21:30.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.572 "allow_any_host": false, 00:21:30.572 "serial_number": "00000000000000000000", 00:21:30.572 "model_number": "SPDK bdev Controller", 00:21:30.572 "max_namespaces": 32, 00:21:30.572 "min_cntlid": 1, 00:21:30.572 "max_cntlid": 65519, 00:21:30.572 "ana_reporting": false 00:21:30.572 } 00:21:30.572 }, 00:21:30.572 { 00:21:30.572 "method": "nvmf_subsystem_add_host", 00:21:30.572 "params": { 00:21:30.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.572 "host": "nqn.2016-06.io.spdk:host1", 00:21:30.572 "psk": "key0" 00:21:30.572 } 00:21:30.572 }, 00:21:30.572 { 00:21:30.572 "method": "nvmf_subsystem_add_ns", 00:21:30.572 "params": { 00:21:30.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.572 "namespace": { 00:21:30.572 "nsid": 1, 00:21:30.572 "bdev_name": "malloc0", 00:21:30.572 "nguid": "CE0A7804C941490986A52E09D4B9B48C", 00:21:30.572 "uuid": "ce0a7804-c941-4909-86a5-2e09d4b9b48c", 00:21:30.572 "no_auto_visible": false 00:21:30.572 } 00:21:30.572 } 00:21:30.572 }, 00:21:30.572 { 00:21:30.572 "method": "nvmf_subsystem_add_listener", 00:21:30.572 "params": { 00:21:30.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.572 "listen_address": { 00:21:30.572 "trtype": "TCP", 00:21:30.572 "adrfam": "IPv4", 00:21:30.572 "traddr": "10.0.0.2", 00:21:30.572 "trsvcid": "4420" 00:21:30.572 }, 00:21:30.572 "secure_channel": false, 00:21:30.572 "sock_impl": "ssl" 00:21:30.572 } 00:21:30.572 } 00:21:30.572 ] 00:21:30.572 } 00:21:30.572 ] 00:21:30.572 }' 00:21:30.572 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:30.833 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:30.833 "subsystems": [ 00:21:30.833 { 00:21:30.833 "subsystem": "keyring", 00:21:30.833 "config": [ 00:21:30.833 { 00:21:30.833 "method": "keyring_file_add_key", 00:21:30.833 "params": { 00:21:30.833 "name": "key0", 00:21:30.833 "path": "/tmp/tmp.mSgh9swVDZ" 00:21:30.833 } 00:21:30.833 } 00:21:30.833 ] 00:21:30.833 }, 00:21:30.833 { 00:21:30.833 "subsystem": "iobuf", 00:21:30.833 "config": [ 00:21:30.833 { 00:21:30.833 "method": "iobuf_set_options", 00:21:30.833 "params": { 00:21:30.833 "small_pool_count": 8192, 00:21:30.833 "large_pool_count": 1024, 00:21:30.833 "small_bufsize": 8192, 00:21:30.833 "large_bufsize": 135168, 00:21:30.833 "enable_numa": false 00:21:30.834 } 00:21:30.834 } 00:21:30.834 ] 00:21:30.834 }, 00:21:30.834 { 00:21:30.834 "subsystem": "sock", 00:21:30.834 "config": [ 00:21:30.834 { 00:21:30.834 "method": "sock_set_default_impl", 00:21:30.834 "params": { 00:21:30.834 "impl_name": "posix" 00:21:30.834 } 00:21:30.834 }, 00:21:30.834 { 00:21:30.834 "method": "sock_impl_set_options", 00:21:30.834 
"params": { 00:21:30.834 "impl_name": "ssl", 00:21:30.834 "recv_buf_size": 4096, 00:21:30.834 "send_buf_size": 4096, 00:21:30.834 "enable_recv_pipe": true, 00:21:30.834 "enable_quickack": false, 00:21:30.834 "enable_placement_id": 0, 00:21:30.834 "enable_zerocopy_send_server": true, 00:21:30.834 "enable_zerocopy_send_client": false, 00:21:30.834 "zerocopy_threshold": 0, 00:21:30.834 "tls_version": 0, 00:21:30.834 "enable_ktls": false 00:21:30.834 } 00:21:30.834 }, 00:21:30.834 { 00:21:30.834 "method": "sock_impl_set_options", 00:21:30.834 "params": { 00:21:30.834 "impl_name": "posix", 00:21:30.834 "recv_buf_size": 2097152, 00:21:30.834 "send_buf_size": 2097152, 00:21:30.834 "enable_recv_pipe": true, 00:21:30.834 "enable_quickack": false, 00:21:30.834 "enable_placement_id": 0, 00:21:30.834 "enable_zerocopy_send_server": true, 00:21:30.834 "enable_zerocopy_send_client": false, 00:21:30.834 "zerocopy_threshold": 0, 00:21:30.834 "tls_version": 0, 00:21:30.834 "enable_ktls": false 00:21:30.834 } 00:21:30.834 } 00:21:30.834 ] 00:21:30.834 }, 00:21:30.834 { 00:21:30.834 "subsystem": "vmd", 00:21:30.834 "config": [] 00:21:30.834 }, 00:21:30.834 { 00:21:30.834 "subsystem": "accel", 00:21:30.834 "config": [ 00:21:30.834 { 00:21:30.834 "method": "accel_set_options", 00:21:30.834 "params": { 00:21:30.834 "small_cache_size": 128, 00:21:30.834 "large_cache_size": 16, 00:21:30.834 "task_count": 2048, 00:21:30.834 "sequence_count": 2048, 00:21:30.834 "buf_count": 2048 00:21:30.834 } 00:21:30.834 } 00:21:30.834 ] 00:21:30.834 }, 00:21:30.834 { 00:21:30.834 "subsystem": "bdev", 00:21:30.834 "config": [ 00:21:30.834 { 00:21:30.834 "method": "bdev_set_options", 00:21:30.834 "params": { 00:21:30.834 "bdev_io_pool_size": 65535, 00:21:30.834 "bdev_io_cache_size": 256, 00:21:30.834 "bdev_auto_examine": true, 00:21:30.834 "iobuf_small_cache_size": 128, 00:21:30.834 "iobuf_large_cache_size": 16 00:21:30.834 } 00:21:30.834 }, 00:21:30.834 { 00:21:30.834 "method": "bdev_raid_set_options", 00:21:30.834 "params": { 00:21:30.834 "process_window_size_kb": 1024, 00:21:30.834 "process_max_bandwidth_mb_sec": 0 00:21:30.834 } 00:21:30.834 }, 00:21:30.834 { 00:21:30.834 "method": "bdev_iscsi_set_options", 00:21:30.834 "params": { 00:21:30.834 "timeout_sec": 30 00:21:30.834 } 00:21:30.834 }, 00:21:30.834 { 00:21:30.834 "method": "bdev_nvme_set_options", 00:21:30.834 "params": { 00:21:30.834 "action_on_timeout": "none", 00:21:30.834 "timeout_us": 0, 00:21:30.834 "timeout_admin_us": 0, 00:21:30.834 "keep_alive_timeout_ms": 10000, 00:21:30.834 "arbitration_burst": 0, 00:21:30.834 "low_priority_weight": 0, 00:21:30.834 "medium_priority_weight": 0, 00:21:30.834 "high_priority_weight": 0, 00:21:30.834 "nvme_adminq_poll_period_us": 10000, 00:21:30.834 "nvme_ioq_poll_period_us": 0, 00:21:30.834 "io_queue_requests": 512, 00:21:30.834 "delay_cmd_submit": true, 00:21:30.834 "transport_retry_count": 4, 00:21:30.834 "bdev_retry_count": 3, 00:21:30.834 "transport_ack_timeout": 0, 00:21:30.834 "ctrlr_loss_timeout_sec": 0, 00:21:30.834 "reconnect_delay_sec": 0, 00:21:30.834 "fast_io_fail_timeout_sec": 0, 00:21:30.834 "disable_auto_failback": false, 00:21:30.834 "generate_uuids": false, 00:21:30.834 "transport_tos": 0, 00:21:30.834 "nvme_error_stat": false, 00:21:30.834 "rdma_srq_size": 0, 00:21:30.834 "io_path_stat": false, 00:21:30.834 "allow_accel_sequence": false, 00:21:30.834 "rdma_max_cq_size": 0, 00:21:30.834 "rdma_cm_event_timeout_ms": 0, 00:21:30.834 "dhchap_digests": [ 00:21:30.834 "sha256", 00:21:30.834 "sha384", 00:21:30.834 
"sha512" 00:21:30.834 ], 00:21:30.834 "dhchap_dhgroups": [ 00:21:30.834 "null", 00:21:30.834 "ffdhe2048", 00:21:30.834 "ffdhe3072", 00:21:30.834 "ffdhe4096", 00:21:30.834 "ffdhe6144", 00:21:30.834 "ffdhe8192" 00:21:30.834 ] 00:21:30.834 } 00:21:30.834 }, 00:21:30.834 { 00:21:30.834 "method": "bdev_nvme_attach_controller", 00:21:30.834 "params": { 00:21:30.834 "name": "nvme0", 00:21:30.834 "trtype": "TCP", 00:21:30.834 "adrfam": "IPv4", 00:21:30.834 "traddr": "10.0.0.2", 00:21:30.834 "trsvcid": "4420", 00:21:30.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.834 "prchk_reftag": false, 00:21:30.834 "prchk_guard": false, 00:21:30.834 "ctrlr_loss_timeout_sec": 0, 00:21:30.834 "reconnect_delay_sec": 0, 00:21:30.834 "fast_io_fail_timeout_sec": 0, 00:21:30.834 "psk": "key0", 00:21:30.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:30.834 "hdgst": false, 00:21:30.834 "ddgst": false, 00:21:30.834 "multipath": "multipath" 00:21:30.834 } 00:21:30.834 }, 00:21:30.834 { 00:21:30.834 "method": "bdev_nvme_set_hotplug", 00:21:30.834 "params": { 00:21:30.834 "period_us": 100000, 00:21:30.834 "enable": false 00:21:30.834 } 00:21:30.834 }, 00:21:30.834 { 00:21:30.834 "method": "bdev_enable_histogram", 00:21:30.834 "params": { 00:21:30.834 "name": "nvme0n1", 00:21:30.834 "enable": true 00:21:30.834 } 00:21:30.834 }, 00:21:30.834 { 00:21:30.834 "method": "bdev_wait_for_examine" 00:21:30.834 } 00:21:30.834 ] 00:21:30.834 }, 00:21:30.834 { 00:21:30.834 "subsystem": "nbd", 00:21:30.834 "config": [] 00:21:30.834 } 00:21:30.834 ] 00:21:30.834 }' 00:21:30.834 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1399081 00:21:30.834 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1399081 ']' 00:21:30.834 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1399081 00:21:30.834 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:30.834 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.834 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1399081 00:21:30.834 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:30.834 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:30.834 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1399081' 00:21:30.834 killing process with pid 1399081 00:21:30.834 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1399081 00:21:30.834 Received shutdown signal, test time was about 1.000000 seconds 00:21:30.834 00:21:30.834 Latency(us) 00:21:30.834 [2024-11-20T08:55:01.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.834 [2024-11-20T08:55:01.750Z] =================================================================================================================== 00:21:30.834 [2024-11-20T08:55:01.750Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:30.834 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1399081 00:21:30.834 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1398736 00:21:30.834 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1398736 
']' 00:21:30.834 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1398736 00:21:30.834 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:30.834 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.834 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1398736 00:21:30.835 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:30.835 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:30.835 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1398736' 00:21:30.835 killing process with pid 1398736 00:21:30.835 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1398736 00:21:30.835 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1398736 00:21:31.095 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:31.095 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:31.095 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:31.095 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.095 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:31.095 "subsystems": [ 00:21:31.095 { 00:21:31.095 "subsystem": "keyring", 00:21:31.095 "config": [ 00:21:31.095 { 00:21:31.095 "method": "keyring_file_add_key", 00:21:31.095 "params": { 00:21:31.095 "name": "key0", 00:21:31.095 "path": "/tmp/tmp.mSgh9swVDZ" 00:21:31.095 } 00:21:31.095 } 00:21:31.095 ] 00:21:31.095 }, 00:21:31.095 { 00:21:31.095 "subsystem": "iobuf", 00:21:31.095 "config": [ 00:21:31.095 { 00:21:31.095 "method": "iobuf_set_options", 00:21:31.095 "params": { 00:21:31.095 "small_pool_count": 8192, 00:21:31.095 "large_pool_count": 1024, 00:21:31.095 "small_bufsize": 8192, 00:21:31.095 "large_bufsize": 135168, 00:21:31.095 "enable_numa": false 00:21:31.095 } 00:21:31.095 } 00:21:31.095 ] 00:21:31.095 }, 00:21:31.095 { 00:21:31.095 "subsystem": "sock", 00:21:31.095 "config": [ 00:21:31.095 { 00:21:31.095 "method": "sock_set_default_impl", 00:21:31.095 "params": { 00:21:31.095 "impl_name": "posix" 00:21:31.095 } 00:21:31.095 }, 00:21:31.095 { 00:21:31.095 "method": "sock_impl_set_options", 00:21:31.095 "params": { 00:21:31.095 "impl_name": "ssl", 00:21:31.095 "recv_buf_size": 4096, 00:21:31.095 "send_buf_size": 4096, 00:21:31.095 "enable_recv_pipe": true, 00:21:31.095 "enable_quickack": false, 00:21:31.095 "enable_placement_id": 0, 00:21:31.095 "enable_zerocopy_send_server": true, 00:21:31.095 "enable_zerocopy_send_client": false, 00:21:31.095 "zerocopy_threshold": 0, 00:21:31.095 "tls_version": 0, 00:21:31.095 "enable_ktls": false 00:21:31.095 } 00:21:31.095 }, 00:21:31.095 { 00:21:31.095 "method": "sock_impl_set_options", 00:21:31.095 "params": { 00:21:31.095 "impl_name": "posix", 00:21:31.095 "recv_buf_size": 2097152, 00:21:31.095 "send_buf_size": 2097152, 00:21:31.095 "enable_recv_pipe": true, 00:21:31.095 "enable_quickack": false, 00:21:31.095 "enable_placement_id": 0, 00:21:31.095 "enable_zerocopy_send_server": true, 00:21:31.095 "enable_zerocopy_send_client": 
false, 00:21:31.095 "zerocopy_threshold": 0, 00:21:31.095 "tls_version": 0, 00:21:31.095 "enable_ktls": false 00:21:31.095 } 00:21:31.095 } 00:21:31.095 ] 00:21:31.095 }, 00:21:31.095 { 00:21:31.095 "subsystem": "vmd", 00:21:31.095 "config": [] 00:21:31.095 }, 00:21:31.095 { 00:21:31.095 "subsystem": "accel", 00:21:31.095 "config": [ 00:21:31.095 { 00:21:31.095 "method": "accel_set_options", 00:21:31.095 "params": { 00:21:31.095 "small_cache_size": 128, 00:21:31.095 "large_cache_size": 16, 00:21:31.095 "task_count": 2048, 00:21:31.095 "sequence_count": 2048, 00:21:31.095 "buf_count": 2048 00:21:31.095 } 00:21:31.095 } 00:21:31.095 ] 00:21:31.095 }, 00:21:31.095 { 00:21:31.095 "subsystem": "bdev", 00:21:31.095 "config": [ 00:21:31.095 { 00:21:31.095 "method": "bdev_set_options", 00:21:31.095 "params": { 00:21:31.095 "bdev_io_pool_size": 65535, 00:21:31.095 "bdev_io_cache_size": 256, 00:21:31.095 "bdev_auto_examine": true, 00:21:31.095 "iobuf_small_cache_size": 128, 00:21:31.095 "iobuf_large_cache_size": 16 00:21:31.095 } 00:21:31.095 }, 00:21:31.095 { 00:21:31.095 "method": "bdev_raid_set_options", 00:21:31.095 "params": { 00:21:31.096 "process_window_size_kb": 1024, 00:21:31.096 "process_max_bandwidth_mb_sec": 0 00:21:31.096 } 00:21:31.096 }, 00:21:31.096 { 00:21:31.096 "method": "bdev_iscsi_set_options", 00:21:31.096 "params": { 00:21:31.096 "timeout_sec": 30 00:21:31.096 } 00:21:31.096 }, 00:21:31.096 { 00:21:31.096 "method": "bdev_nvme_set_options", 00:21:31.096 "params": { 00:21:31.096 "action_on_timeout": "none", 00:21:31.096 "timeout_us": 0, 00:21:31.096 "timeout_admin_us": 0, 00:21:31.096 "keep_alive_timeout_ms": 10000, 00:21:31.096 "arbitration_burst": 0, 00:21:31.096 "low_priority_weight": 0, 00:21:31.096 "medium_priority_weight": 0, 00:21:31.096 "high_priority_weight": 0, 00:21:31.096 "nvme_adminq_poll_period_us": 10000, 00:21:31.096 "nvme_ioq_poll_period_us": 0, 00:21:31.096 "io_queue_requests": 0, 00:21:31.096 "delay_cmd_submit": true, 00:21:31.096 "transport_retry_count": 4, 00:21:31.096 "bdev_retry_count": 3, 00:21:31.096 "transport_ack_timeout": 0, 00:21:31.096 "ctrlr_loss_timeout_sec": 0, 00:21:31.096 "reconnect_delay_sec": 0, 00:21:31.096 "fast_io_fail_timeout_sec": 0, 00:21:31.096 "disable_auto_failback": false, 00:21:31.096 "generate_uuids": false, 00:21:31.096 "transport_tos": 0, 00:21:31.096 "nvme_error_stat": false, 00:21:31.096 "rdma_srq_size": 0, 00:21:31.096 "io_path_stat": false, 00:21:31.096 "allow_accel_sequence": false, 00:21:31.096 "rdma_max_cq_size": 0, 00:21:31.096 "rdma_cm_event_timeout_ms": 0, 00:21:31.096 "dhchap_digests": [ 00:21:31.096 "sha256", 00:21:31.096 "sha384", 00:21:31.096 "sha512" 00:21:31.096 ], 00:21:31.096 "dhchap_dhgroups": [ 00:21:31.096 "null", 00:21:31.096 "ffdhe2048", 00:21:31.096 "ffdhe3072", 00:21:31.096 "ffdhe4096", 00:21:31.096 "ffdhe6144", 00:21:31.096 "ffdhe8192" 00:21:31.096 ] 00:21:31.096 } 00:21:31.096 }, 00:21:31.096 { 00:21:31.096 "method": "bdev_nvme_set_hotplug", 00:21:31.096 "params": { 00:21:31.096 "period_us": 100000, 00:21:31.096 "enable": false 00:21:31.096 } 00:21:31.096 }, 00:21:31.096 { 00:21:31.096 "method": "bdev_malloc_create", 00:21:31.096 "params": { 00:21:31.096 "name": "malloc0", 00:21:31.096 "num_blocks": 8192, 00:21:31.096 "block_size": 4096, 00:21:31.096 "physical_block_size": 4096, 00:21:31.096 "uuid": "ce0a7804-c941-4909-86a5-2e09d4b9b48c", 00:21:31.096 "optimal_io_boundary": 0, 00:21:31.096 "md_size": 0, 00:21:31.096 "dif_type": 0, 00:21:31.096 "dif_is_head_of_md": false, 00:21:31.096 "dif_pi_format": 0 
00:21:31.096 } 00:21:31.096 }, 00:21:31.096 { 00:21:31.096 "method": "bdev_wait_for_examine" 00:21:31.096 } 00:21:31.096 ] 00:21:31.096 }, 00:21:31.096 { 00:21:31.096 "subsystem": "nbd", 00:21:31.096 "config": [] 00:21:31.096 }, 00:21:31.096 { 00:21:31.096 "subsystem": "scheduler", 00:21:31.096 "config": [ 00:21:31.096 { 00:21:31.096 "method": "framework_set_scheduler", 00:21:31.096 "params": { 00:21:31.096 "name": "static" 00:21:31.096 } 00:21:31.096 } 00:21:31.096 ] 00:21:31.096 }, 00:21:31.096 { 00:21:31.096 "subsystem": "nvmf", 00:21:31.096 "config": [ 00:21:31.096 { 00:21:31.096 "method": "nvmf_set_config", 00:21:31.096 "params": { 00:21:31.096 "discovery_filter": "match_any", 00:21:31.096 "admin_cmd_passthru": { 00:21:31.096 "identify_ctrlr": false 00:21:31.096 }, 00:21:31.096 "dhchap_digests": [ 00:21:31.096 "sha256", 00:21:31.096 "sha384", 00:21:31.096 "sha512" 00:21:31.096 ], 00:21:31.096 "dhchap_dhgroups": [ 00:21:31.096 "null", 00:21:31.096 "ffdhe2048", 00:21:31.096 "ffdhe3072", 00:21:31.096 "ffdhe4096", 00:21:31.096 "ffdhe6144", 00:21:31.096 "ffdhe8192" 00:21:31.096 ] 00:21:31.096 } 00:21:31.096 }, 00:21:31.096 { 00:21:31.096 "method": "nvmf_set_max_subsystems", 00:21:31.096 "params": { 00:21:31.096 "max_subsystems": 1024 00:21:31.096 } 00:21:31.096 }, 00:21:31.096 { 00:21:31.096 "method": "nvmf_set_crdt", 00:21:31.096 "params": { 00:21:31.096 "crdt1": 0, 00:21:31.096 "crdt2": 0, 00:21:31.096 "crdt3": 0 00:21:31.096 } 00:21:31.096 }, 00:21:31.096 { 00:21:31.096 "method": "nvmf_create_transport", 00:21:31.096 "params": { 00:21:31.096 "trtype": "TCP", 00:21:31.096 "max_queue_depth": 128, 00:21:31.096 "max_io_qpairs_per_ctrlr": 127, 00:21:31.096 "in_capsule_data_size": 4096, 00:21:31.096 "max_io_size": 131072, 00:21:31.096 "io_unit_size": 131072, 00:21:31.096 "max_aq_depth": 128, 00:21:31.096 "num_shared_buffers": 511, 00:21:31.096 "buf_cache_size": 4294967295, 00:21:31.096 "dif_insert_or_strip": false, 00:21:31.096 "zcopy": false, 00:21:31.096 "c2h_success": false, 00:21:31.096 "sock_priority": 0, 00:21:31.096 "abort_timeout_sec": 1, 00:21:31.096 "ack_timeout": 0, 00:21:31.096 "data_wr_pool_size": 0 00:21:31.096 } 00:21:31.096 }, 00:21:31.096 { 00:21:31.096 "method": "nvmf_create_subsystem", 00:21:31.096 "params": { 00:21:31.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.096 "allow_any_host": false, 00:21:31.096 "serial_number": "00000000000000000000", 00:21:31.096 "model_number": "SPDK bdev Controller", 00:21:31.096 "max_namespaces": 32, 00:21:31.096 "min_cntlid": 1, 00:21:31.096 "max_cntlid": 65519, 00:21:31.096 "ana_reporting": false 00:21:31.096 } 00:21:31.096 }, 00:21:31.096 { 00:21:31.096 "method": "nvmf_subsystem_add_host", 00:21:31.096 "params": { 00:21:31.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.096 "host": "nqn.2016-06.io.spdk:host1", 00:21:31.096 "psk": "key0" 00:21:31.096 } 00:21:31.096 }, 00:21:31.096 { 00:21:31.096 "method": "nvmf_subsystem_add_ns", 00:21:31.096 "params": { 00:21:31.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.096 "namespace": { 00:21:31.096 "nsid": 1, 00:21:31.096 "bdev_name": "malloc0", 00:21:31.096 "nguid": "CE0A7804C941490986A52E09D4B9B48C", 00:21:31.096 "uuid": "ce0a7804-c941-4909-86a5-2e09d4b9b48c", 00:21:31.096 "no_auto_visible": false 00:21:31.096 } 00:21:31.096 } 00:21:31.096 }, 00:21:31.096 { 00:21:31.096 "method": "nvmf_subsystem_add_listener", 00:21:31.096 "params": { 00:21:31.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.096 "listen_address": { 00:21:31.096 "trtype": "TCP", 00:21:31.096 "adrfam": "IPv4", 
00:21:31.096 "traddr": "10.0.0.2", 00:21:31.096 "trsvcid": "4420" 00:21:31.096 }, 00:21:31.096 "secure_channel": false, 00:21:31.096 "sock_impl": "ssl" 00:21:31.096 } 00:21:31.096 } 00:21:31.096 ] 00:21:31.096 } 00:21:31.096 ] 00:21:31.096 }' 00:21:31.096 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1399659 00:21:31.096 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:31.096 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1399659 00:21:31.096 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1399659 ']' 00:21:31.096 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.096 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.096 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.096 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.096 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.096 [2024-11-20 09:55:01.902136] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:21:31.096 [2024-11-20 09:55:01.902200] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.096 [2024-11-20 09:55:01.989868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.357 [2024-11-20 09:55:02.018803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.357 [2024-11-20 09:55:02.018829] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.357 [2024-11-20 09:55:02.018835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.357 [2024-11-20 09:55:02.018840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.357 [2024-11-20 09:55:02.018844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
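
The target above is launched with its configuration on /dev/fd/62: the echoed JSON never touches disk, because the test feeds it through a process substitution and hands the resulting descriptor path to nvmf_tgt inside the cvl_0_0_ns_spdk network namespace. A minimal sketch of that launch pattern, assuming $config holds the JSON printed above (binary path shortened):

    # Sketch: start the target inside the test netns, handing it the config
    # through a process substitution the shell exposes as /dev/fd/62.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$config") &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # suite helper traced above; waits on /var/tmp/spdk.sock
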
00:21:31.357 [2024-11-20 09:55:02.019328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.357 [2024-11-20 09:55:02.212125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.357 [2024-11-20 09:55:02.244154] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:31.357 [2024-11-20 09:55:02.244363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.930 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.930 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:31.930 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:31.930 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:31.930 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.930 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.930 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1399792 00:21:31.930 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1399792 /var/tmp/bdevperf.sock 00:21:31.930 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1399792 ']' 00:21:31.930 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.930 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.930 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
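
The listener that just came up on 10.0.0.2 port 4420 is the product of three TLS-relevant methods in the config dumped above: keyring_file_add_key registers the PSK file as key0, nvmf_subsystem_add_host ties that key to host1, and nvmf_subsystem_add_listener selects the ssl socket implementation for the TCP listener. Stripped of the tuning parameters, and with every value copied from the dump, the target-side TLS wiring reduces to roughly this sketch:

    # Sketch of the TLS-relevant subset of the target config above; the
    # transport/subsystem/namespace creation methods shown in the dump
    # would still be required for a working target.
    cat <<'EOF' > tls_target_min.json
    { "subsystems": [
      { "subsystem": "keyring", "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.mSgh9swVDZ" } } ] },
      { "subsystem": "nvmf", "config": [
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.2", "trsvcid": "4420" },
                      "secure_channel": false, "sock_impl": "ssl" } } ] } ] }
    EOF

The bdevperf config dumped below is the initiator-side mirror of this: the same PSK file is loaded as key0 into the bdevperf app's keyring, and bdev_nvme_attach_controller references it by name ("psk": "key0") together with hostnqn host1. The same attachment can also be made interactively against the bdevperf RPC socket; a sketch follows, with the caveat that rpc.py flag spellings drift between SPDK versions, so treat them as illustrative rather than exact:

    # Sketch: load the PSK into the initiator keyring, then attach the
    # TLS-protected controller by referencing the key name.
    scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/tmp.mSgh9swVDZ
    scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0
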
00:21:31.930 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:31.930 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.930 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.930 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:31.930 "subsystems": [ 00:21:31.930 { 00:21:31.930 "subsystem": "keyring", 00:21:31.930 "config": [ 00:21:31.930 { 00:21:31.930 "method": "keyring_file_add_key", 00:21:31.930 "params": { 00:21:31.930 "name": "key0", 00:21:31.930 "path": "/tmp/tmp.mSgh9swVDZ" 00:21:31.930 } 00:21:31.930 } 00:21:31.930 ] 00:21:31.930 }, 00:21:31.930 { 00:21:31.930 "subsystem": "iobuf", 00:21:31.930 "config": [ 00:21:31.930 { 00:21:31.930 "method": "iobuf_set_options", 00:21:31.930 "params": { 00:21:31.930 "small_pool_count": 8192, 00:21:31.930 "large_pool_count": 1024, 00:21:31.930 "small_bufsize": 8192, 00:21:31.930 "large_bufsize": 135168, 00:21:31.930 "enable_numa": false 00:21:31.930 } 00:21:31.930 } 00:21:31.930 ] 00:21:31.930 }, 00:21:31.930 { 00:21:31.930 "subsystem": "sock", 00:21:31.930 "config": [ 00:21:31.930 { 00:21:31.930 "method": "sock_set_default_impl", 00:21:31.930 "params": { 00:21:31.930 "impl_name": "posix" 00:21:31.930 } 00:21:31.930 }, 00:21:31.930 { 00:21:31.930 "method": "sock_impl_set_options", 00:21:31.930 "params": { 00:21:31.930 "impl_name": "ssl", 00:21:31.930 "recv_buf_size": 4096, 00:21:31.930 "send_buf_size": 4096, 00:21:31.930 "enable_recv_pipe": true, 00:21:31.930 "enable_quickack": false, 00:21:31.930 "enable_placement_id": 0, 00:21:31.930 "enable_zerocopy_send_server": true, 00:21:31.930 "enable_zerocopy_send_client": false, 00:21:31.930 "zerocopy_threshold": 0, 00:21:31.930 "tls_version": 0, 00:21:31.930 "enable_ktls": false 00:21:31.930 } 00:21:31.930 }, 00:21:31.930 { 00:21:31.930 "method": "sock_impl_set_options", 00:21:31.930 "params": { 00:21:31.930 "impl_name": "posix", 00:21:31.930 "recv_buf_size": 2097152, 00:21:31.930 "send_buf_size": 2097152, 00:21:31.930 "enable_recv_pipe": true, 00:21:31.930 "enable_quickack": false, 00:21:31.930 "enable_placement_id": 0, 00:21:31.930 "enable_zerocopy_send_server": true, 00:21:31.930 "enable_zerocopy_send_client": false, 00:21:31.930 "zerocopy_threshold": 0, 00:21:31.930 "tls_version": 0, 00:21:31.930 "enable_ktls": false 00:21:31.930 } 00:21:31.930 } 00:21:31.930 ] 00:21:31.930 }, 00:21:31.930 { 00:21:31.930 "subsystem": "vmd", 00:21:31.930 "config": [] 00:21:31.930 }, 00:21:31.930 { 00:21:31.930 "subsystem": "accel", 00:21:31.930 "config": [ 00:21:31.930 { 00:21:31.930 "method": "accel_set_options", 00:21:31.930 "params": { 00:21:31.930 "small_cache_size": 128, 00:21:31.930 "large_cache_size": 16, 00:21:31.930 "task_count": 2048, 00:21:31.930 "sequence_count": 2048, 00:21:31.930 "buf_count": 2048 00:21:31.930 } 00:21:31.930 } 00:21:31.930 ] 00:21:31.930 }, 00:21:31.930 { 00:21:31.930 "subsystem": "bdev", 00:21:31.930 "config": [ 00:21:31.930 { 00:21:31.930 "method": "bdev_set_options", 00:21:31.930 "params": { 00:21:31.930 "bdev_io_pool_size": 65535, 00:21:31.930 "bdev_io_cache_size": 256, 00:21:31.930 "bdev_auto_examine": true, 00:21:31.930 "iobuf_small_cache_size": 128, 00:21:31.930 "iobuf_large_cache_size": 16 00:21:31.930 } 00:21:31.930 }, 00:21:31.930 { 00:21:31.930 "method": 
"bdev_raid_set_options", 00:21:31.930 "params": { 00:21:31.930 "process_window_size_kb": 1024, 00:21:31.930 "process_max_bandwidth_mb_sec": 0 00:21:31.930 } 00:21:31.930 }, 00:21:31.930 { 00:21:31.930 "method": "bdev_iscsi_set_options", 00:21:31.930 "params": { 00:21:31.930 "timeout_sec": 30 00:21:31.930 } 00:21:31.930 }, 00:21:31.930 { 00:21:31.930 "method": "bdev_nvme_set_options", 00:21:31.930 "params": { 00:21:31.930 "action_on_timeout": "none", 00:21:31.930 "timeout_us": 0, 00:21:31.930 "timeout_admin_us": 0, 00:21:31.930 "keep_alive_timeout_ms": 10000, 00:21:31.930 "arbitration_burst": 0, 00:21:31.930 "low_priority_weight": 0, 00:21:31.930 "medium_priority_weight": 0, 00:21:31.930 "high_priority_weight": 0, 00:21:31.930 "nvme_adminq_poll_period_us": 10000, 00:21:31.930 "nvme_ioq_poll_period_us": 0, 00:21:31.930 "io_queue_requests": 512, 00:21:31.930 "delay_cmd_submit": true, 00:21:31.930 "transport_retry_count": 4, 00:21:31.930 "bdev_retry_count": 3, 00:21:31.930 "transport_ack_timeout": 0, 00:21:31.930 "ctrlr_loss_timeout_sec": 0, 00:21:31.930 "reconnect_delay_sec": 0, 00:21:31.930 "fast_io_fail_timeout_sec": 0, 00:21:31.930 "disable_auto_failback": false, 00:21:31.930 "generate_uuids": false, 00:21:31.930 "transport_tos": 0, 00:21:31.930 "nvme_error_stat": false, 00:21:31.930 "rdma_srq_size": 0, 00:21:31.930 "io_path_stat": false, 00:21:31.930 "allow_accel_sequence": false, 00:21:31.930 "rdma_max_cq_size": 0, 00:21:31.930 "rdma_cm_event_timeout_ms": 0, 00:21:31.930 "dhchap_digests": [ 00:21:31.930 "sha256", 00:21:31.930 "sha384", 00:21:31.930 "sha512" 00:21:31.930 ], 00:21:31.930 "dhchap_dhgroups": [ 00:21:31.930 "null", 00:21:31.930 "ffdhe2048", 00:21:31.930 "ffdhe3072", 00:21:31.930 "ffdhe4096", 00:21:31.930 "ffdhe6144", 00:21:31.930 "ffdhe8192" 00:21:31.930 ] 00:21:31.930 } 00:21:31.930 }, 00:21:31.930 { 00:21:31.930 "method": "bdev_nvme_attach_controller", 00:21:31.930 "params": { 00:21:31.931 "name": "nvme0", 00:21:31.931 "trtype": "TCP", 00:21:31.931 "adrfam": "IPv4", 00:21:31.931 "traddr": "10.0.0.2", 00:21:31.931 "trsvcid": "4420", 00:21:31.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.931 "prchk_reftag": false, 00:21:31.931 "prchk_guard": false, 00:21:31.931 "ctrlr_loss_timeout_sec": 0, 00:21:31.931 "reconnect_delay_sec": 0, 00:21:31.931 "fast_io_fail_timeout_sec": 0, 00:21:31.931 "psk": "key0", 00:21:31.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:31.931 "hdgst": false, 00:21:31.931 "ddgst": false, 00:21:31.931 "multipath": "multipath" 00:21:31.931 } 00:21:31.931 }, 00:21:31.931 { 00:21:31.931 "method": "bdev_nvme_set_hotplug", 00:21:31.931 "params": { 00:21:31.931 "period_us": 100000, 00:21:31.931 "enable": false 00:21:31.931 } 00:21:31.931 }, 00:21:31.931 { 00:21:31.931 "method": "bdev_enable_histogram", 00:21:31.931 "params": { 00:21:31.931 "name": "nvme0n1", 00:21:31.931 "enable": true 00:21:31.931 } 00:21:31.931 }, 00:21:31.931 { 00:21:31.931 "method": "bdev_wait_for_examine" 00:21:31.931 } 00:21:31.931 ] 00:21:31.931 }, 00:21:31.931 { 00:21:31.931 "subsystem": "nbd", 00:21:31.931 "config": [] 00:21:31.931 } 00:21:31.931 ] 00:21:31.931 }' 00:21:31.931 [2024-11-20 09:55:02.778462] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:21:31.931 [2024-11-20 09:55:02.778513] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1399792 ] 00:21:32.191 [2024-11-20 09:55:02.859844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.191 [2024-11-20 09:55:02.889379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.191 [2024-11-20 09:55:03.023988] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:32.764 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.764 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:32.764 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:32.764 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:33.024 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.024 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:33.024 Running I/O for 1 seconds... 00:21:33.965 5177.00 IOPS, 20.22 MiB/s 00:21:33.965 Latency(us) 00:21:33.965 [2024-11-20T08:55:04.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.965 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:33.965 Verification LBA range: start 0x0 length 0x2000 00:21:33.965 nvme0n1 : 1.05 5055.34 19.75 0.00 0.00 24828.79 6580.91 48278.19 00:21:33.965 [2024-11-20T08:55:04.881Z] =================================================================================================================== 00:21:33.965 [2024-11-20T08:55:04.881Z] Total : 5055.34 19.75 0.00 0.00 24828.79 6580.91 48278.19 00:21:33.965 { 00:21:33.965 "results": [ 00:21:33.965 { 00:21:33.965 "job": "nvme0n1", 00:21:33.965 "core_mask": "0x2", 00:21:33.965 "workload": "verify", 00:21:33.965 "status": "finished", 00:21:33.965 "verify_range": { 00:21:33.965 "start": 0, 00:21:33.965 "length": 8192 00:21:33.965 }, 00:21:33.965 "queue_depth": 128, 00:21:33.965 "io_size": 4096, 00:21:33.965 "runtime": 1.049386, 00:21:33.965 "iops": 5055.33712094501, 00:21:33.965 "mibps": 19.747410628691444, 00:21:33.965 "io_failed": 0, 00:21:33.965 "io_timeout": 0, 00:21:33.965 "avg_latency_us": 24828.78612629595, 00:21:33.965 "min_latency_us": 6580.906666666667, 00:21:33.965 "max_latency_us": 48278.18666666667 00:21:33.965 } 00:21:33.965 ], 00:21:33.965 "core_count": 1 00:21:33.965 } 00:21:34.226 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:34.226 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:34.226 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:34.226 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:34.226 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:34.226 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:21:34.226 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:34.226 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:34.226 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:34.226 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:34.226 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:34.226 nvmf_trace.0 00:21:34.226 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:34.226 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1399792 00:21:34.226 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1399792 ']' 00:21:34.226 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1399792 00:21:34.226 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:34.226 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.226 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1399792 00:21:34.226 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:34.226 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:34.226 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1399792' 00:21:34.226 killing process with pid 1399792 00:21:34.226 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1399792 00:21:34.226 Received shutdown signal, test time was about 1.000000 seconds 00:21:34.226 00:21:34.226 Latency(us) 00:21:34.226 [2024-11-20T08:55:05.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.226 [2024-11-20T08:55:05.142Z] =================================================================================================================== 00:21:34.226 [2024-11-20T08:55:05.142Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.226 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1399792 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:34.486 rmmod nvme_tcp 00:21:34.486 rmmod nvme_fabrics 00:21:34.486 rmmod nvme_keyring 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:34.486 09:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1399659 ']' 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1399659 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1399659 ']' 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1399659 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1399659 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1399659' 00:21:34.486 killing process with pid 1399659 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1399659 00:21:34.486 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1399659 00:21:34.746 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:34.746 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:34.746 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:34.746 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:21:34.746 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:34.746 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:34.746 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:34.746 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:34.746 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:34.746 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.746 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.746 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.658 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:36.658 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.BAilOcvkZX /tmp/tmp.5PyJJX7lEM /tmp/tmp.mSgh9swVDZ 00:21:36.658 00:21:36.658 real 1m27.167s 00:21:36.658 user 2m18.647s 00:21:36.658 sys 0m26.234s 00:21:36.658 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:36.658 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.658 ************************************ 00:21:36.658 END TEST nvmf_tls 
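
The cleanup traced above runs strictly initiator-first: the shm trace buffer is archived, the bdevperf process (pid 1399792) is killed and reaped, then the target (pid 1399659), and only afterwards are the kernel modules and the per-test network state unwound, so nothing still holds a reference to them. Condensed into one sketch, with values taken from the trace ($output_dir and $bdevperf_pid stand in for the suite's variables):

    # Condensed sketch of the traced teardown sequence.
    tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
    killprocess "$bdevperf_pid"    # initiator first (1399792 above)
    killprocess "$nvmfpid"         # then the target (1399659 above)
    set +e                         # module unload is allowed to fail
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics    # rmmod output above also shows nvme_keyring leaving
    set -e
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip test rules
    _remove_spdk_ns                # suite helper traced above
    ip -4 addr flush cvl_0_1
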
00:21:36.658 ************************************ 00:21:36.658 09:55:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:36.658 09:55:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:36.658 09:55:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:36.658 09:55:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:36.921 ************************************ 00:21:36.921 START TEST nvmf_fips 00:21:36.921 ************************************ 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:36.921 * Looking for test storage... 00:21:36.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:36.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.921 --rc genhtml_branch_coverage=1 00:21:36.921 --rc genhtml_function_coverage=1 00:21:36.921 --rc genhtml_legend=1 00:21:36.921 --rc geninfo_all_blocks=1 00:21:36.921 --rc geninfo_unexecuted_blocks=1 00:21:36.921 00:21:36.921 ' 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:36.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.921 --rc genhtml_branch_coverage=1 00:21:36.921 --rc genhtml_function_coverage=1 00:21:36.921 --rc genhtml_legend=1 00:21:36.921 --rc geninfo_all_blocks=1 00:21:36.921 --rc geninfo_unexecuted_blocks=1 00:21:36.921 00:21:36.921 ' 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:36.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.921 --rc genhtml_branch_coverage=1 00:21:36.921 --rc genhtml_function_coverage=1 00:21:36.921 --rc genhtml_legend=1 00:21:36.921 --rc geninfo_all_blocks=1 00:21:36.921 --rc geninfo_unexecuted_blocks=1 00:21:36.921 00:21:36.921 ' 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:36.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.921 --rc genhtml_branch_coverage=1 00:21:36.921 --rc genhtml_function_coverage=1 00:21:36.921 --rc genhtml_legend=1 00:21:36.921 --rc geninfo_all_blocks=1 00:21:36.921 --rc geninfo_unexecuted_blocks=1 00:21:36.921 00:21:36.921 ' 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.921 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:36.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:36.922 09:55:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:36.922 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:37.184 Error setting digest 00:21:37.184 40E26BC31A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:37.184 40E26BC31A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:37.184 
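The trace above is fips.sh gating the whole test on a working FIPS setup: OpenSSL must be >= 3.0.0, fips.so must exist under the modules directory, a config pointing at the FIPS provider is generated and exported via OPENSSL_CONF, the provider list must show both a base and a fips provider, and a non-approved digest (MD5) must be rejected. A minimal standalone sketch of that gate, assuming bash 4+ and OpenSSL 3.x (check_fips_mode is an illustrative name, not part of the SPDK scripts):

    check_fips_mode() {
        # Expect exactly two active providers: the base provider and the
        # FIPS provider loaded through the generated OPENSSL_CONF.
        local providers
        mapfile -t providers < <(openssl list -providers | grep name)
        (( ${#providers[@]} == 2 )) || return 1
        [[ ${providers[0],,} == *base* ]] || return 1
        [[ ${providers[1],,} == *fips* ]] || return 1
        # MD5 is not FIPS-approved; if it succeeds, enforcement is off.
        if echo -n test | openssl md5 >/dev/null 2>&1; then
            echo 'MD5 succeeded - FIPS mode is not enforced' >&2
            return 1
        fi
    }

The "Error setting digest ... unsupported" lines in the log are exactly this negative check passing: the MD5 fetch fails inside the FIPS-configured library context, so the NOT wrapper returns success and the test proceeds.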
09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:37.184 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.329 09:55:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:45.329 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:45.329 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:45.329 09:55:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:45.329 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:45.329 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.329 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:45.330 09:55:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:45.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:21:45.330 00:21:45.330 --- 10.0.0.2 ping statistics --- 00:21:45.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.330 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:45.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:45.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:21:45.330 00:21:45.330 --- 10.0.0.1 ping statistics --- 00:21:45.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.330 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1404500 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1404500 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1404500 ']' 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.330 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:45.330 [2024-11-20 09:55:15.552293] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
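Everything between gather_supported_nvmf_pci_devs and the two successful pings is nvmf_tcp_init splitting one dual-port e810 NIC into a target side and an initiator side: the first port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the second (cvl_0_1) stays in the root namespace as 10.0.0.1, so NVMe/TCP traffic crosses the physical link rather than being short-circuited inside one network stack. Condensed to the commands actually traced:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port, isolated
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator port, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP listener port, tagged so teardown can strip it:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

Because the namespace owns 10.0.0.2, nvmf_tgt has to be launched with ip netns exec cvl_0_0_ns_spdk, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD just before nvmftestinit returns 0.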
00:21:45.330 [2024-11-20 09:55:15.552366] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.330 [2024-11-20 09:55:15.653018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.330 [2024-11-20 09:55:15.704313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.330 [2024-11-20 09:55:15.704362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.330 [2024-11-20 09:55:15.704371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.330 [2024-11-20 09:55:15.704379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.330 [2024-11-20 09:55:15.704385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:45.330 [2024-11-20 09:55:15.705188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.591 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.591 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:45.591 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:45.591 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:45.591 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:45.591 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.591 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:45.591 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:45.591 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:45.591 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.ByH 00:21:45.591 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:45.591 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.ByH 00:21:45.591 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.ByH 00:21:45.591 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.ByH 00:21:45.591 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:45.853 [2024-11-20 09:55:16.564983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.853 [2024-11-20 09:55:16.580976] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:45.853 [2024-11-20 09:55:16.581297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.853 malloc0 00:21:45.853 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:45.853 09:55:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1404854 00:21:45.853 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1404854 /var/tmp/bdevperf.sock 00:21:45.853 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:45.853 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1404854 ']' 00:21:45.853 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.853 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.853 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:45.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:45.853 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.853 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:45.853 [2024-11-20 09:55:16.724595] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:21:45.853 [2024-11-20 09:55:16.724670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1404854 ] 00:21:46.114 [2024-11-20 09:55:16.817323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.114 [2024-11-20 09:55:16.868361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.686 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.686 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:46.686 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.ByH 00:21:46.948 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:46.948 [2024-11-20 09:55:17.849363] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:47.209 TLSTESTn1 00:21:47.209 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:47.209 Running I/O for 10 seconds... 
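The TLS handshake itself is exercised through bdevperf: fips.sh takes an NVMe TLS PSK in interchange format (NVMeTLSkey-1:01:...), writes it to a chmod-0600 temp file, registers it with the target over rpc.py, and then drives the initiator side through bdevperf's RPC socket. Stripped of the long workspace paths, the initiator sequence traced above is:

    # Register the PSK file with bdevperf's keyring under the name key0.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/spdk-psk.ByH
    # Attach to the TLS listener on the target namespace's address.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk key0
    # Run the queued verify workload configured at bdevperf startup
    # (-q 128 -o 4096 -w verify -t 10).
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The ten per-second IOPS samples and the latency table that follow are that ten-second verify run completing against the TLSTESTn1 bdev.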
00:21:49.535 4542.00 IOPS, 17.74 MiB/s [2024-11-20T08:55:21.395Z] 5376.50 IOPS, 21.00 MiB/s [2024-11-20T08:55:22.337Z] 5308.67 IOPS, 20.74 MiB/s [2024-11-20T08:55:23.453Z] 5414.25 IOPS, 21.15 MiB/s [2024-11-20T08:55:24.427Z] 5334.60 IOPS, 20.84 MiB/s [2024-11-20T08:55:25.368Z] 5463.50 IOPS, 21.34 MiB/s [2024-11-20T08:55:26.309Z] 5359.71 IOPS, 20.94 MiB/s [2024-11-20T08:55:27.251Z] 5282.00 IOPS, 20.63 MiB/s [2024-11-20T08:55:28.198Z] 5282.56 IOPS, 20.63 MiB/s [2024-11-20T08:55:28.198Z] 5395.60 IOPS, 21.08 MiB/s 00:21:57.282 Latency(us) 00:21:57.282 [2024-11-20T08:55:28.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.282 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:57.282 Verification LBA range: start 0x0 length 0x2000 00:21:57.282 TLSTESTn1 : 10.02 5395.84 21.08 0.00 0.00 23680.69 6062.08 50462.72 00:21:57.282 [2024-11-20T08:55:28.198Z] =================================================================================================================== 00:21:57.282 [2024-11-20T08:55:28.198Z] Total : 5395.84 21.08 0.00 0.00 23680.69 6062.08 50462.72 00:21:57.282 { 00:21:57.282 "results": [ 00:21:57.282 { 00:21:57.282 "job": "TLSTESTn1", 00:21:57.282 "core_mask": "0x4", 00:21:57.282 "workload": "verify", 00:21:57.282 "status": "finished", 00:21:57.282 "verify_range": { 00:21:57.282 "start": 0, 00:21:57.282 "length": 8192 00:21:57.282 }, 00:21:57.282 "queue_depth": 128, 00:21:57.282 "io_size": 4096, 00:21:57.282 "runtime": 10.023084, 00:21:57.282 "iops": 5395.844233172145, 00:21:57.282 "mibps": 21.077516535828693, 00:21:57.282 "io_failed": 0, 00:21:57.282 "io_timeout": 0, 00:21:57.282 "avg_latency_us": 23680.68688891765, 00:21:57.282 "min_latency_us": 6062.08, 00:21:57.282 "max_latency_us": 50462.72 00:21:57.282 } 00:21:57.282 ], 00:21:57.282 "core_count": 1 00:21:57.282 } 00:21:57.282 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:57.282 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:57.282 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:57.282 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:57.282 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:57.282 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:57.282 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:57.282 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:57.282 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:57.282 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:57.282 nvmf_trace.0 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1404854 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1404854 ']' 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # 
kill -0 1404854 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1404854 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1404854' 00:21:57.543 killing process with pid 1404854 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1404854 00:21:57.543 Received shutdown signal, test time was about 10.000000 seconds 00:21:57.543 00:21:57.543 Latency(us) 00:21:57.543 [2024-11-20T08:55:28.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.543 [2024-11-20T08:55:28.459Z] =================================================================================================================== 00:21:57.543 [2024-11-20T08:55:28.459Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1404854 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:57.543 rmmod nvme_tcp 00:21:57.543 rmmod nvme_fabrics 00:21:57.543 rmmod nvme_keyring 00:21:57.543 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1404500 ']' 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1404500 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1404500 ']' 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1404500 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1404500 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1404500' 00:21:57.805 killing process with pid 1404500 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1404500 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1404500 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.805 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.352 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:00.352 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.ByH 00:22:00.352 00:22:00.352 real 0m23.150s 00:22:00.352 user 0m24.801s 00:22:00.352 sys 0m9.630s 00:22:00.352 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:00.352 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:00.352 ************************************ 00:22:00.352 END TEST nvmf_fips 00:22:00.352 ************************************ 00:22:00.352 09:55:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:00.352 09:55:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:00.352 09:55:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:00.352 09:55:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:00.352 ************************************ 00:22:00.352 START TEST nvmf_control_msg_list 00:22:00.352 ************************************ 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:00.353 * Looking for test storage... 
00:22:00.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:22:00.353 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:00.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.353 --rc genhtml_branch_coverage=1 00:22:00.353 --rc genhtml_function_coverage=1 00:22:00.353 --rc genhtml_legend=1 00:22:00.353 --rc geninfo_all_blocks=1 00:22:00.353 --rc geninfo_unexecuted_blocks=1 00:22:00.353 00:22:00.353 ' 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:00.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.353 --rc genhtml_branch_coverage=1 00:22:00.353 --rc genhtml_function_coverage=1 00:22:00.353 --rc genhtml_legend=1 00:22:00.353 --rc geninfo_all_blocks=1 00:22:00.353 --rc geninfo_unexecuted_blocks=1 00:22:00.353 00:22:00.353 ' 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:00.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.353 --rc genhtml_branch_coverage=1 00:22:00.353 --rc genhtml_function_coverage=1 00:22:00.353 --rc genhtml_legend=1 00:22:00.353 --rc geninfo_all_blocks=1 00:22:00.353 --rc geninfo_unexecuted_blocks=1 00:22:00.353 00:22:00.353 ' 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:00.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.353 --rc genhtml_branch_coverage=1 00:22:00.353 --rc genhtml_function_coverage=1 00:22:00.353 --rc genhtml_legend=1 00:22:00.353 --rc geninfo_all_blocks=1 00:22:00.353 --rc geninfo_unexecuted_blocks=1 00:22:00.353 00:22:00.353 ' 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:00.353 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.354 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.354 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.354 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:00.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:00.354 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:00.354 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:00.354 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:00.354 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:00.354 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:00.354 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.354 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:00.354 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:00.354 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:00.354 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.354 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.354 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.354 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:00.354 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:00.354 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:22:00.354 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:22:08.494 09:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:08.494 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.494 09:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:08.494 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:08.494 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:08.494 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.494 09:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:08.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:08.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms
00:22:08.494
00:22:08.494 --- 10.0.0.2 ping statistics ---
00:22:08.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:08.494 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:08.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:08.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms
00:22:08.494
00:22:08.494 --- 10.0.0.1 ping statistics ---
00:22:08.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:08.494 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1411223
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1411223
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1411223 ']'
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:08.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:08.494 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:08.494 [2024-11-20 09:55:38.633613] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:22:08.494 [2024-11-20 09:55:38.633686] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:08.494 [2024-11-20 09:55:38.733106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:08.494 [2024-11-20 09:55:38.784177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:08.494 [2024-11-20 09:55:38.784226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:08.494 [2024-11-20 09:55:38.784235] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:08.494 [2024-11-20 09:55:38.784243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:08.494 [2024-11-20 09:55:38.784249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
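The waitforlisten step traced above is a poll loop: launch nvmf_tgt inside the namespace, then retry an RPC against /var/tmp/spdk.sock until the app answers. A minimal sketch of that pattern in bash (the binary path, namespace name, and socket path are taken from this log; the loop bound and the use of spdk_get_version as the probe are illustrative assumptions, not the exact autotest helper):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    # Retry until the RPC server is listening on the UNIX-domain socket.
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.5
    done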
00:22:08.494 [2024-11-20 09:55:38.785049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:08.757 [2024-11-20 09:55:39.499166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:08.757 Malloc0
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:08.757 [2024-11-20 09:55:39.553485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1411560
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1411561
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1411562
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1411560
00:22:08.757 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:08.758 [2024-11-20 09:55:39.643980] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:22:08.758 [2024-11-20 09:55:39.653811] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:22:08.758 [2024-11-20 09:55:39.664074] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:22:10.142 Initializing NVMe Controllers
00:22:10.142 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:22:10.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:22:10.142 Initialization complete. Launching workers.
00:22:10.142 ========================================================
00:22:10.142 Latency(us)
00:22:10.142 Device Information : IOPS MiB/s Average min max
00:22:10.142 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40911.51 40862.08 41108.34
00:22:10.142 ========================================================
00:22:10.142 Total : 25.00 0.10 40911.51 40862.08 41108.34
00:22:10.142
00:22:10.142 Initializing NVMe Controllers
00:22:10.142 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:22:10.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:22:10.142 Initialization complete. Launching workers.
00:22:10.142 ========================================================
00:22:10.142 Latency(us)
00:22:10.142 Device Information : IOPS MiB/s Average min max
00:22:10.142 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40897.98 40731.64 40959.86
00:22:10.142 ========================================================
00:22:10.142 Total : 25.00 0.10 40897.98 40731.64 40959.86
00:22:10.142
00:22:10.142 Initializing NVMe Controllers
00:22:10.142 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:22:10.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:22:10.142 Initialization complete. Launching workers.
00:22:10.142 ========================================================
00:22:10.142 Latency(us)
00:22:10.142 Device Information : IOPS MiB/s Average min max
00:22:10.142 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40895.21 40695.52 40995.04
00:22:10.142 ========================================================
00:22:10.142 Total : 25.00 0.10 40895.21 40695.52 40995.04
00:22:10.142
00:22:10.142 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1411561
00:22:10.142 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1411562
00:22:10.142 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:22:10.142 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:22:10.142 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:10.142 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:22:10.142 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:10.142 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:22:10.142 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:10.142 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:10.142 rmmod nvme_tcp
00:22:10.142 rmmod nvme_fabrics
00:22:10.142 rmmod nvme_keyring
00:22:10.143 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:10.143 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:22:10.143 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:22:10.143 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list --
nvmf/common.sh@517 -- # '[' -n 1411223 ']' 00:22:10.143 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1411223 00:22:10.143 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1411223 ']' 00:22:10.143 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1411223 00:22:10.143 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:22:10.143 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.143 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1411223 00:22:10.143 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:10.143 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:10.143 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1411223' 00:22:10.143 killing process with pid 1411223 00:22:10.143 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1411223 00:22:10.143 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1411223 00:22:10.403 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:10.403 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:10.403 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:10.403 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:22:10.403 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:22:10.403 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:10.403 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:22:10.403 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:10.403 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:10.403 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.403 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.403 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.947 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:12.947 00:22:12.947 real 0m12.491s 00:22:12.947 user 0m8.194s 00:22:12.947 sys 0m6.524s 00:22:12.947 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.947 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:12.947 ************************************ 00:22:12.947 END TEST nvmf_control_msg_list 00:22:12.947 
************************************ 00:22:12.947 09:55:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:12.947 09:55:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:12.947 09:55:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:12.947 09:55:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:12.947 ************************************ 00:22:12.947 START TEST nvmf_wait_for_buf 00:22:12.947 ************************************ 00:22:12.947 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:12.947 * Looking for test storage... 00:22:12.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:12.947 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:12.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.948 --rc genhtml_branch_coverage=1 00:22:12.948 --rc genhtml_function_coverage=1 00:22:12.948 --rc genhtml_legend=1 00:22:12.948 --rc geninfo_all_blocks=1 00:22:12.948 --rc geninfo_unexecuted_blocks=1 00:22:12.948 00:22:12.948 ' 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:12.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.948 --rc genhtml_branch_coverage=1 00:22:12.948 --rc genhtml_function_coverage=1 00:22:12.948 --rc genhtml_legend=1 00:22:12.948 --rc geninfo_all_blocks=1 00:22:12.948 --rc geninfo_unexecuted_blocks=1 00:22:12.948 00:22:12.948 ' 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:12.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.948 --rc genhtml_branch_coverage=1 00:22:12.948 --rc genhtml_function_coverage=1 00:22:12.948 --rc genhtml_legend=1 00:22:12.948 --rc geninfo_all_blocks=1 00:22:12.948 --rc geninfo_unexecuted_blocks=1 00:22:12.948 00:22:12.948 ' 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:12.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.948 --rc genhtml_branch_coverage=1 00:22:12.948 --rc genhtml_function_coverage=1 00:22:12.948 --rc genhtml_legend=1 00:22:12.948 --rc geninfo_all_blocks=1 00:22:12.948 --rc geninfo_unexecuted_blocks=1 00:22:12.948 00:22:12.948 ' 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:12.948 09:55:43 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:12.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:22:12.948 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.949 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:12.949 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:12.949 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:12.949 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.949 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.949 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.949 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:12.949 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:12.949 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:12.949 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:21.092 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.092 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:21.092 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:21.092 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:21.092 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.093 
09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:21.093 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:21.093 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:21.093 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:21.093 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.093 09:55:50 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:21.093 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:21.093 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:21.093 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:21.093 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:21.093 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:21.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:21.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms
00:22:21.093
00:22:21.093 --- 10.0.0.2 ping statistics ---
00:22:21.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:21.093 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms
00:22:21.093 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:21.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:21.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms
00:22:21.093
00:22:21.093 --- 10.0.0.1 ping statistics ---
00:22:21.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:21.093 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms
00:22:21.093 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:21.093 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1415900
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1415900
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1415900 ']'
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:21.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:21.094 [2024-11-20 09:55:51.180782] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
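This second target is started with --wait-for-rpc so the test can shrink the iobuf small-buffer pool before the framework initializes; the pass/fail signal is the pool's retry counter read back at the end. A condensed sketch of that flow in bash (the RPC names, option values, and jq filter are the ones this log invokes; the rpc() wrapper and the final check are illustrative):

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    rpc framework_start_init
    # ... drive reads through the target so buffer requests must queue ...
    retries=$(rpc iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retries -eq 0 ]] && exit 1   # expect at least one retry once the pool is exhausted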
00:22:21.094 [2024-11-20 09:55:51.180848] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.094 [2024-11-20 09:55:51.281123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.094 [2024-11-20 09:55:51.332367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.094 [2024-11-20 09:55:51.332420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.094 [2024-11-20 09:55:51.332429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.094 [2024-11-20 09:55:51.332437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.094 [2024-11-20 09:55:51.332443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:21.094 [2024-11-20 09:55:51.333195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:21.094 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.355 09:55:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:21.355 Malloc0 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:21.355 [2024-11-20 09:55:52.157838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:21.355 [2024-11-20 09:55:52.194147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.355 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:21.619 [2024-11-20 09:55:52.301305] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:22:23.004 Initializing NVMe Controllers
00:22:23.004 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:22:23.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:22:23.004 Initialization complete. Launching workers.
00:22:23.004 ========================================================
00:22:23.004                                                                             Latency(us)
00:22:23.004 Device Information                                                 :       IOPS      MiB/s    Average        min        max
00:22:23.004 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0:      24.98       3.12  166475.00   47861.17  194545.74
00:22:23.004 ========================================================
00:22:23.004 Total                                                              :      24.98       3.12  166475.00   47861.17  194545.74
00:22:23.004
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]]
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:23.004 rmmod nvme_tcp
00:22:23.004 rmmod nvme_fabrics
00:22:23.004 rmmod nvme_keyring
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1415900 ']'
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1415900
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1415900 ']'
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1415900
00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf --
common/autotest_common.sh@959 -- # uname 00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.004 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1415900 00:22:23.266 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:23.266 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:23.266 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1415900' 00:22:23.266 killing process with pid 1415900 00:22:23.266 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1415900 00:22:23.266 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1415900 00:22:23.266 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:23.266 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:23.266 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:23.266 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:23.266 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:23.266 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:23.266 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:23.266 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:23.266 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:23.266 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.266 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.266 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.814 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:25.814 00:22:25.814 real 0m12.771s 00:22:25.814 user 0m5.188s 00:22:25.814 sys 0m6.169s 00:22:25.814 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:25.814 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:25.814 ************************************ 00:22:25.814 END TEST nvmf_wait_for_buf 00:22:25.814 ************************************ 00:22:25.814 09:55:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:25.814 09:55:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:25.814 09:55:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:25.814 09:55:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:25.814 09:55:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:25.814 09:55:56 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:33.957 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:33.957 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.957 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:33.958 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:33.958 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:33.958 ************************************ 00:22:33.958 START TEST nvmf_perf_adq 00:22:33.958 ************************************ 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:33.958 * Looking for test storage... 00:22:33.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:33.958 09:56:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:33.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.958 --rc genhtml_branch_coverage=1 00:22:33.958 --rc genhtml_function_coverage=1 00:22:33.958 --rc genhtml_legend=1 00:22:33.958 --rc geninfo_all_blocks=1 00:22:33.958 --rc geninfo_unexecuted_blocks=1 00:22:33.958 00:22:33.958 ' 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:33.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.958 --rc genhtml_branch_coverage=1 00:22:33.958 --rc genhtml_function_coverage=1 00:22:33.958 --rc genhtml_legend=1 00:22:33.958 --rc geninfo_all_blocks=1 00:22:33.958 --rc geninfo_unexecuted_blocks=1 00:22:33.958 00:22:33.958 ' 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:33.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.958 --rc genhtml_branch_coverage=1 00:22:33.958 --rc genhtml_function_coverage=1 00:22:33.958 --rc genhtml_legend=1 00:22:33.958 --rc geninfo_all_blocks=1 00:22:33.958 --rc geninfo_unexecuted_blocks=1 00:22:33.958 00:22:33.958 ' 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:33.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.958 --rc genhtml_branch_coverage=1 00:22:33.958 --rc genhtml_function_coverage=1 00:22:33.958 --rc genhtml_legend=1 00:22:33.958 --rc geninfo_all_blocks=1 00:22:33.958 --rc geninfo_unexecuted_blocks=1 00:22:33.958 00:22:33.958 ' 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
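The lcov probe traced above runs scripts/common.sh's cmp_versions: each version string is split on '.', '-' and ':' and the fields are compared numerically, field by field. A minimal standalone sketch of that comparison follows; the function name and the numeric-fields-only simplification are mine, not the script's:

  ver_lt() {
      # True (exit 0) when $1 sorts strictly before $2.
      local IFS=.-:          # split fields the way cmp_versions does
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1               # equal versions are not less-than
  }
  ver_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # mirrors the "lt 1.15 2" call above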
00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.958 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.959 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.959 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.959 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.959 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:33.959 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.959 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:33.959 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:33.959 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:33.959 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.959 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.959 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.959 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:33.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:33.959 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:33.959 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:33.959 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:33.959 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:33.959 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:33.959 09:56:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.549 09:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:40.549 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:40.549 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:40.549 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:40.549 09:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:40.549 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:40.549 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:41.935 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:43.847 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:49.135 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:49.136 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:49.136 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:49.136 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:49.136 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:49.136 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:49.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:49.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms
00:22:49.137
00:22:49.137 --- 10.0.0.2 ping statistics ---
00:22:49.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:49.137 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms
00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:49.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:49.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:22:49.137 00:22:49.137 --- 10.0.0.1 ping statistics --- 00:22:49.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.137 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1426137 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1426137 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1426137 ']' 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.137 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.137 [2024-11-20 09:56:19.991814] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
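Before the network setup above, perf_adq's adq_reload_driver step (traced earlier at perf_adq.sh@58-63) cycled the E810 driver so ADQ starts from a clean state. Condensed, with annotations that are my reading of those four traced commands rather than comments from the script:

  modprobe -a sch_mqprio   # the mqprio qdisc that ADQ traffic classes hang off
  rmmod ice                # unload the Intel E810 (ice) driver...
  modprobe ice             # ...and reload it, dropping any stale channel/filter state
  sleep 5                  # give the ports time to come back up before nvmftestinit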
00:22:49.137 [2024-11-20 09:56:19.991880] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.398 [2024-11-20 09:56:20.092563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:49.398 [2024-11-20 09:56:20.147356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.398 [2024-11-20 09:56:20.147407] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.398 [2024-11-20 09:56:20.147417] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.398 [2024-11-20 09:56:20.147425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.398 [2024-11-20 09:56:20.147432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:49.398 [2024-11-20 09:56:20.149493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.398 [2024-11-20 09:56:20.149759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.398 [2024-11-20 09:56:20.149923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.398 [2024-11-20 09:56:20.149924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.970 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.970 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:49.970 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:49.970 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:49.970 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.970 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.970 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:49.970 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:49.970 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:49.970 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.970 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.970 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.232 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:50.232 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:50.232 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.232 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.232 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.232 
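
Note: the nvmf/common.sh@271-291 steps traced above reduce to a short namespace recipe. A minimal sketch, assuming root and the two E810 ports cvl_0_0/cvl_0_1 named in this log; the iptables comment is abbreviated here, whereas the harness embeds the full rule text after the SPDK_NVMF tag:

TARGET_IF=cvl_0_0        # port handed to the SPDK target
INITIATOR_IF=cvl_0_1     # port left in the default namespace for the initiator
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                 # isolate the target port
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open TCP/4420 toward the initiator port; the SPDK_NVMF comment tag is what
# the later "iptables-save | grep -v SPDK_NVMF | iptables-restore" cleanup
# step keys on to strip exactly these rules.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment SPDK_NVMF

ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator
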
09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:50.232 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.232 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.232 [2024-11-20 09:56:21.026329] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.232 Malloc1 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.232 [2024-11-20 09:56:21.101365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1426487 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:50.232 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:52.778 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:52.778 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.778 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.778 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.778 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:52.778 "tick_rate": 2400000000, 00:22:52.778 "poll_groups": [ 00:22:52.778 { 00:22:52.778 "name": "nvmf_tgt_poll_group_000", 00:22:52.778 "admin_qpairs": 1, 00:22:52.778 "io_qpairs": 1, 00:22:52.778 "current_admin_qpairs": 1, 00:22:52.778 "current_io_qpairs": 1, 00:22:52.778 "pending_bdev_io": 0, 00:22:52.778 "completed_nvme_io": 16041, 00:22:52.778 "transports": [ 00:22:52.778 { 00:22:52.778 "trtype": "TCP" 00:22:52.778 } 00:22:52.778 ] 00:22:52.778 }, 00:22:52.778 { 00:22:52.778 "name": "nvmf_tgt_poll_group_001", 00:22:52.778 "admin_qpairs": 0, 00:22:52.778 "io_qpairs": 1, 00:22:52.778 "current_admin_qpairs": 0, 00:22:52.778 "current_io_qpairs": 1, 00:22:52.778 "pending_bdev_io": 0, 00:22:52.778 "completed_nvme_io": 16412, 00:22:52.778 "transports": [ 00:22:52.778 { 00:22:52.778 "trtype": "TCP" 00:22:52.778 } 00:22:52.778 ] 00:22:52.778 }, 00:22:52.778 { 00:22:52.778 "name": "nvmf_tgt_poll_group_002", 00:22:52.778 "admin_qpairs": 0, 00:22:52.778 "io_qpairs": 1, 00:22:52.778 "current_admin_qpairs": 0, 00:22:52.778 "current_io_qpairs": 1, 00:22:52.778 "pending_bdev_io": 0, 00:22:52.778 "completed_nvme_io": 17839, 00:22:52.778 "transports": [ 00:22:52.778 { 00:22:52.778 "trtype": "TCP" 00:22:52.778 } 00:22:52.778 ] 00:22:52.778 }, 00:22:52.778 { 00:22:52.778 "name": "nvmf_tgt_poll_group_003", 00:22:52.778 "admin_qpairs": 0, 00:22:52.778 "io_qpairs": 1, 00:22:52.778 "current_admin_qpairs": 0, 00:22:52.778 "current_io_qpairs": 1, 00:22:52.778 "pending_bdev_io": 0, 00:22:52.778 "completed_nvme_io": 15950, 00:22:52.778 "transports": [ 00:22:52.778 { 00:22:52.778 "trtype": "TCP" 00:22:52.778 } 00:22:52.778 ] 00:22:52.778 } 00:22:52.778 ] 00:22:52.778 }' 00:22:52.778 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:52.778 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:52.778 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:52.778 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:52.778 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1426487 00:23:00.917 Initializing NVMe Controllers 00:23:00.917 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:00.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:00.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:00.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:00.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:23:00.917 Initialization complete. Launching workers. 00:23:00.917 ======================================================== 00:23:00.917 Latency(us) 00:23:00.917 Device Information : IOPS MiB/s Average min max 00:23:00.917 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12190.30 47.62 5265.58 1244.51 44046.47 00:23:00.917 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12841.20 50.16 4983.11 1213.08 12538.08 00:23:00.917 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12603.60 49.23 5089.98 1226.56 45383.80 00:23:00.917 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13215.60 51.62 4842.09 1312.06 13610.52 00:23:00.917 ======================================================== 00:23:00.917 Total : 50850.70 198.64 5040.66 1213.08 45383.80 00:23:00.917 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:00.917 rmmod nvme_tcp 00:23:00.917 rmmod nvme_fabrics 00:23:00.917 rmmod nvme_keyring 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1426137 ']' 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1426137 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1426137 ']' 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1426137 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1426137 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1426137' 00:23:00.917 killing process with pid 1426137 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1426137 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1426137 00:23:00.917 09:56:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.917 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.831 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:02.831 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:23:02.831 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:02.831 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:04.742 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:06.786 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:12.081 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:12.081 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:12.081 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:12.081 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:12.081 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:12.081 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:12.082 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:12.082 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:12.082 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:12.082 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:12.082 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:12.083 09:56:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:12.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:12.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:23:12.083 00:23:12.083 --- 10.0.0.2 ping statistics --- 00:23:12.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.083 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:12.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:12.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:23:12.083 00:23:12.083 --- 10.0.0.1 ping statistics --- 00:23:12.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.083 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:12.083 net.core.busy_poll = 1 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:12.083 net.core.busy_read = 1 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1430978 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1430978 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1430978 ']' 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.083 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:12.344 [2024-11-20 09:56:42.995054] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:23:12.344 [2024-11-20 09:56:42.995120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.344 [2024-11-20 09:56:43.095999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:12.344 [2024-11-20 09:56:43.149147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
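
Note: the adq_configure_driver trace above (perf_adq.sh@22-38), together with the sock_impl_set_options / nvmf_create_transport calls that follow below, is the complete ADQ recipe for this run. A condensed sketch, assuming the namespaced E810 port cvl_0_0 from this log and an SPDK checkout providing scripts/rpc.py (path assumed, not shown in the trace):

ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }   # the namespace built earlier
DEV=cvl_0_0

# NIC side: offload TC classification and pin NVMe/TCP traffic to a dedicated
# traffic class backed by hardware queues 2-3 (num_tc 2, second TC at 2@2).
ns ethtool --offload "$DEV" hw-tc-offload on
ns ethtool --set-priv-flags "$DEV" channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1 net.core.busy_read=1
ns tc qdisc add dev "$DEV" root mqprio num_tc 2 map 0 1 \
   queues 2@0 2@2 hw 1 mode channel
ns tc qdisc add dev "$DEV" ingress
ns tc filter add dev "$DEV" protocol ip parent ffff: prio 1 flower \
   dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

# Target side (the rpc_cmd calls below in the trace): publish placement IDs
# from the posix sock layer and tag its sockets with the matching priority so
# each accepted connection is polled by the group that owns its hardware queue.
RPC=scripts/rpc.py        # path assumed
$RPC sock_impl_set_options -i posix --enable-placement-id 1 \
     --enable-zerocopy-send-server
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
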
00:23:12.344 [2024-11-20 09:56:43.149209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.344 [2024-11-20 09:56:43.149219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.344 [2024-11-20 09:56:43.149226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.344 [2024-11-20 09:56:43.149233] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:12.344 [2024-11-20 09:56:43.151281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.344 [2024-11-20 09:56:43.151500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:12.344 [2024-11-20 09:56:43.151334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.344 [2024-11-20 09:56:43.151501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.915 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.915 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:12.915 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:12.915 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:12.915 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.175 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.175 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:13.175 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:13.175 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:13.175 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.175 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.175 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.175 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:13.175 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:13.175 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.175 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.175 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.175 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:13.175 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.175 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.175 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.175 09:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:13.175 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.175 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.175 [2024-11-20 09:56:44.015821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.175 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.176 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:13.176 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.176 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.176 Malloc1 00:23:13.176 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.176 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:13.176 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.176 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.176 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.176 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:13.176 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.176 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.436 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.436 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:13.436 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.436 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.436 [2024-11-20 09:56:44.097607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.436 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.436 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1431314 00:23:13.436 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:13.436 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:15.395 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:15.395 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.395 09:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:15.395 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.395 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:15.395 "tick_rate": 2400000000, 00:23:15.395 "poll_groups": [ 00:23:15.395 { 00:23:15.395 "name": "nvmf_tgt_poll_group_000", 00:23:15.395 "admin_qpairs": 1, 00:23:15.395 "io_qpairs": 2, 00:23:15.395 "current_admin_qpairs": 1, 00:23:15.395 "current_io_qpairs": 2, 00:23:15.395 "pending_bdev_io": 0, 00:23:15.395 "completed_nvme_io": 25672, 00:23:15.395 "transports": [ 00:23:15.395 { 00:23:15.395 "trtype": "TCP" 00:23:15.395 } 00:23:15.395 ] 00:23:15.395 }, 00:23:15.395 { 00:23:15.395 "name": "nvmf_tgt_poll_group_001", 00:23:15.395 "admin_qpairs": 0, 00:23:15.395 "io_qpairs": 2, 00:23:15.395 "current_admin_qpairs": 0, 00:23:15.395 "current_io_qpairs": 2, 00:23:15.395 "pending_bdev_io": 0, 00:23:15.395 "completed_nvme_io": 28513, 00:23:15.395 "transports": [ 00:23:15.395 { 00:23:15.395 "trtype": "TCP" 00:23:15.395 } 00:23:15.395 ] 00:23:15.395 }, 00:23:15.395 { 00:23:15.395 "name": "nvmf_tgt_poll_group_002", 00:23:15.395 "admin_qpairs": 0, 00:23:15.395 "io_qpairs": 0, 00:23:15.395 "current_admin_qpairs": 0, 00:23:15.395 "current_io_qpairs": 0, 00:23:15.395 "pending_bdev_io": 0, 00:23:15.395 "completed_nvme_io": 0, 00:23:15.395 "transports": [ 00:23:15.395 { 00:23:15.395 "trtype": "TCP" 00:23:15.395 } 00:23:15.395 ] 00:23:15.395 }, 00:23:15.395 { 00:23:15.395 "name": "nvmf_tgt_poll_group_003", 00:23:15.395 "admin_qpairs": 0, 00:23:15.395 "io_qpairs": 0, 00:23:15.395 "current_admin_qpairs": 0, 00:23:15.395 "current_io_qpairs": 0, 00:23:15.395 "pending_bdev_io": 0, 00:23:15.395 "completed_nvme_io": 0, 00:23:15.395 "transports": [ 00:23:15.395 { 00:23:15.395 "trtype": "TCP" 00:23:15.395 } 00:23:15.395 ] 00:23:15.395 } 00:23:15.395 ] 00:23:15.395 }' 00:23:15.395 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:15.395 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:15.395 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:23:15.395 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:23:15.395 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1431314 00:23:23.535 Initializing NVMe Controllers 00:23:23.535 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:23.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:23.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:23.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:23.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:23.535 Initialization complete. Launching workers. 
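
Note: the nvmf_get_stats check that just ran is the actual pass/fail gate for ADQ. The baseline run (perf_adq.sh@86-87) required all four poll groups to carry exactly one I/O qpair; this run (perf_adq.sh@108-109) instead requires at least two groups to sit idle, with every qpair packed onto the groups owning the TC-backed queues. A sketch of the same check, assuming scripts/rpc.py and jq on PATH:

stats=$(scripts/rpc.py nvmf_get_stats)

# One output line per idle poll group, mirroring the "jq ... | wc -l"
# pipeline in the trace above.
idle=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
           <<< "$stats" | wc -l)

if (( idle < 2 )); then
    echo "ADQ steering ineffective: only $idle idle poll group(s)" >&2
    exit 1
fi
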
00:23:23.535 ======================================================== 00:23:23.535 Latency(us) 00:23:23.535 Device Information : IOPS MiB/s Average min max 00:23:23.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10640.70 41.57 6025.57 956.35 53770.93 00:23:23.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9976.90 38.97 6414.75 1172.18 52535.10 00:23:23.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8972.30 35.05 7132.57 1280.53 52863.72 00:23:23.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8377.50 32.72 7648.27 1170.71 53044.42 00:23:23.535 ======================================================== 00:23:23.535 Total : 37967.40 148.31 6747.49 956.35 53770.93 00:23:23.535 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:23.535 rmmod nvme_tcp 00:23:23.535 rmmod nvme_fabrics 00:23:23.535 rmmod nvme_keyring 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1430978 ']' 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1430978 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1430978 ']' 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1430978 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1430978 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1430978' 00:23:23.535 killing process with pid 1430978 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1430978 00:23:23.535 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1430978 00:23:23.795 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:23.795 
09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:23.795 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:23.795 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:23.795 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:23.795 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:23.795 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:23.795 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:23.795 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:23.795 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.795 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.795 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:27.093 00:23:27.093 real 0m54.223s 00:23:27.093 user 2m50.248s 00:23:27.093 sys 0m11.465s 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:27.093 ************************************ 00:23:27.093 END TEST nvmf_perf_adq 00:23:27.093 ************************************ 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:27.093 ************************************ 00:23:27.093 START TEST nvmf_shutdown 00:23:27.093 ************************************ 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:27.093 * Looking for test storage... 
00:23:27.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:27.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.093 --rc genhtml_branch_coverage=1 00:23:27.093 --rc genhtml_function_coverage=1 00:23:27.093 --rc genhtml_legend=1 00:23:27.093 --rc geninfo_all_blocks=1 00:23:27.093 --rc geninfo_unexecuted_blocks=1 00:23:27.093 00:23:27.093 ' 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:27.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.093 --rc genhtml_branch_coverage=1 00:23:27.093 --rc genhtml_function_coverage=1 00:23:27.093 --rc genhtml_legend=1 00:23:27.093 --rc geninfo_all_blocks=1 00:23:27.093 --rc geninfo_unexecuted_blocks=1 00:23:27.093 00:23:27.093 ' 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:27.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.093 --rc genhtml_branch_coverage=1 00:23:27.093 --rc genhtml_function_coverage=1 00:23:27.093 --rc genhtml_legend=1 00:23:27.093 --rc geninfo_all_blocks=1 00:23:27.093 --rc geninfo_unexecuted_blocks=1 00:23:27.093 00:23:27.093 ' 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:27.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.093 --rc genhtml_branch_coverage=1 00:23:27.093 --rc genhtml_function_coverage=1 00:23:27.093 --rc genhtml_legend=1 00:23:27.093 --rc geninfo_all_blocks=1 00:23:27.093 --rc geninfo_unexecuted_blocks=1 00:23:27.093 00:23:27.093 ' 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
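
Note: the scripts/common.sh@333-368 walk above is a component-wise version compare ("lt 1.15 2") used to decide which lcov coverage options shutdown.sh exports. A rough standalone equivalent, assuming purely numeric components as in the traced "1.15" vs "2" case:

lt() {                        # usage: lt VER1 VER2 -> success if VER1 < VER2
    local IFS='.-:'           # split on the same separators the trace shows
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
        (( x < y )) && return 0           # first differing component decides
        (( x > y )) && return 1
    done
    return 1                              # equal versions are not less-than
}

lt 1.15 2 && echo "lcov 1.15 predates 2"  # succeeds, matching the trace
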
00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:27.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:27.093 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:27.094 09:56:57 
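The "[: : integer expression expected" diagnostic captured above is worth pausing on: build_nvmf_app_args ran '[' '' -eq 1 ']', meaning an unset variable reached a numeric test(1) operand, so the test errored and the script fell through to the false branch. A minimal sketch of the conventional guard, with SPDK_RUN_NON_ROOT as a hypothetical stand-in name (the offending variable's name is not visible in this log):

# Hypothetical variable name; the fix is the ${...:-0} default, which
# guarantees test(1) always sees an integer operand.
if [ "${SPDK_RUN_NON_ROOT:-0}" -eq 1 ]; then
    : # non-root setup would go here
fi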
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:27.094 ************************************ 00:23:27.094 START TEST nvmf_shutdown_tc1 00:23:27.094 ************************************ 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:27.094 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:35.232 09:57:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:35.232 09:57:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:35.232 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:35.232 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:35.232 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:35.233 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:35.233 09:57:05 
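The discovery loop above resolves each matched PCI function (vendor 0x8086, device 0x159b, driver ice) to its kernel net device by globbing sysfs, which is all pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) does. The same lookup as a self-contained sketch, using the two addresses recorded in this log:

for pci in 0000:4b:00.0 0000:4b:00.1; do
    # every entry under the function's net/ directory is a netdev it backs
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
    done
done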
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:35.233 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:35.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:23:35.233 00:23:35.233 --- 10.0.0.2 ping statistics --- 00:23:35.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.233 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:35.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:35.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:23:35.233 00:23:35.233 --- 10.0.0.1 ping statistics --- 00:23:35.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.233 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1437884 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1437884 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1437884 ']' 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
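Condensing the nvmf_tcp_init trace above: the target NIC (cvl_0_0) is moved into a fresh network namespace, both ends get addresses on 10.0.0.0/24, TCP port 4420 is opened in the firewall, and a ping in each direction proves the path before the target starts. The same topology as one runnable block (root required; interface names as recorded in this log):

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
ip netns add cvl_0_0_ns_spdk                           # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator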
00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.233 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:35.233 [2024-11-20 09:57:05.667165] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:23:35.233 [2024-11-20 09:57:05.667247] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.233 [2024-11-20 09:57:05.767697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:35.233 [2024-11-20 09:57:05.819031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.233 [2024-11-20 09:57:05.819079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.233 [2024-11-20 09:57:05.819090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.233 [2024-11-20 09:57:05.819098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.233 [2024-11-20 09:57:05.819105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.233 [2024-11-20 09:57:05.821124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.233 [2024-11-20 09:57:05.821287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.233 [2024-11-20 09:57:05.821558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:35.233 [2024-11-20 09:57:05.821563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:35.808 [2024-11-20 09:57:06.536806] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:35.808 09:57:06 
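nvmfappstart above launches nvmf_tgt inside the namespace (nvmfpid=1437884) and then parks in waitforlisten until the app's RPC socket answers; the DPDK EAL banner and the four reactor notices are the target coming up on cores 1-4. A minimal sketch of such a wait loop, assuming only a pid and a socket path (this is not the autotest_common.sh implementation):

# Hypothetical helper: poll until the app creates its RPC socket or dies.
wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for (( i = 0; i < 100; i++ )); do            # roughly a 10 s budget
        kill -0 "$pid" 2>/dev/null || return 1   # app exited during startup
        [ -S "$sock" ] && return 0               # socket file has appeared
        sleep 0.1
    done
    return 1                                     # timed out
}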
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.808 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:35.808 Malloc1 
00:23:35.808 [2024-11-20 09:57:06.667449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.808 Malloc2 00:23:36.069 Malloc3 00:23:36.069 Malloc4 00:23:36.069 Malloc5 00:23:36.069 Malloc6 00:23:36.069 Malloc7 00:23:36.069 Malloc8 00:23:36.332 Malloc9 00:23:36.332 Malloc10 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1438262 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1438262 /var/tmp/bdevperf.sock 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1438262 ']' 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
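The create_subsystems loop above (one cat per index in {1..10}) assembles target/rpcs.txt, a batch of RPCs replayed against the target; the Malloc1 through Malloc10 lines are those bdevs coming into existence. A sketch of what each iteration plausibly appends, using real SPDK RPC method names but with the exact arguments assumed rather than taken from this log (MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 are set in the trace):

cat <<-EOF >> "$testdir/rpcs.txt"
	bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc$i
	nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
	nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
	nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT
EOF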
00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.332 { 00:23:36.332 "params": { 00:23:36.332 "name": "Nvme$subsystem", 00:23:36.332 "trtype": "$TEST_TRANSPORT", 00:23:36.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.332 "adrfam": "ipv4", 00:23:36.332 "trsvcid": "$NVMF_PORT", 00:23:36.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.332 "hdgst": ${hdgst:-false}, 00:23:36.332 "ddgst": ${ddgst:-false} 00:23:36.332 }, 00:23:36.332 "method": "bdev_nvme_attach_controller" 00:23:36.332 } 00:23:36.332 EOF 00:23:36.332 )") 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.332 { 00:23:36.332 "params": { 00:23:36.332 "name": "Nvme$subsystem", 00:23:36.332 "trtype": "$TEST_TRANSPORT", 00:23:36.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.332 "adrfam": "ipv4", 00:23:36.332 "trsvcid": "$NVMF_PORT", 00:23:36.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.332 "hdgst": ${hdgst:-false}, 00:23:36.332 "ddgst": ${ddgst:-false} 00:23:36.332 }, 00:23:36.332 "method": "bdev_nvme_attach_controller" 00:23:36.332 } 00:23:36.332 EOF 00:23:36.332 )") 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.332 { 00:23:36.332 "params": { 00:23:36.332 "name": "Nvme$subsystem", 00:23:36.332 "trtype": "$TEST_TRANSPORT", 00:23:36.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.332 "adrfam": "ipv4", 00:23:36.332 "trsvcid": "$NVMF_PORT", 00:23:36.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.332 "hdgst": ${hdgst:-false}, 00:23:36.332 "ddgst": ${ddgst:-false} 00:23:36.332 }, 00:23:36.332 "method": "bdev_nvme_attach_controller" 00:23:36.332 } 00:23:36.332 EOF 00:23:36.332 )") 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.332 { 00:23:36.332 "params": { 00:23:36.332 "name": "Nvme$subsystem", 00:23:36.332 "trtype": "$TEST_TRANSPORT", 00:23:36.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.332 "adrfam": "ipv4", 00:23:36.332 "trsvcid": "$NVMF_PORT", 00:23:36.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.332 "hdgst": ${hdgst:-false}, 00:23:36.332 "ddgst": ${ddgst:-false} 00:23:36.332 }, 00:23:36.332 "method": "bdev_nvme_attach_controller" 00:23:36.332 } 00:23:36.332 EOF 00:23:36.332 )") 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.332 { 00:23:36.332 "params": { 00:23:36.332 "name": "Nvme$subsystem", 00:23:36.332 "trtype": "$TEST_TRANSPORT", 00:23:36.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.332 "adrfam": "ipv4", 00:23:36.332 "trsvcid": "$NVMF_PORT", 00:23:36.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.332 "hdgst": ${hdgst:-false}, 00:23:36.332 "ddgst": ${ddgst:-false} 00:23:36.332 }, 00:23:36.332 "method": "bdev_nvme_attach_controller" 00:23:36.332 } 00:23:36.332 EOF 00:23:36.332 )") 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.332 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.332 { 00:23:36.332 "params": { 00:23:36.332 "name": "Nvme$subsystem", 00:23:36.332 "trtype": "$TEST_TRANSPORT", 00:23:36.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.332 "adrfam": "ipv4", 00:23:36.332 "trsvcid": "$NVMF_PORT", 00:23:36.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.332 "hdgst": ${hdgst:-false}, 00:23:36.332 "ddgst": ${ddgst:-false} 00:23:36.332 }, 00:23:36.333 "method": "bdev_nvme_attach_controller" 00:23:36.333 } 00:23:36.333 EOF 00:23:36.333 )") 00:23:36.333 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:36.333 [2024-11-20 09:57:07.185564] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:23:36.333 [2024-11-20 09:57:07.185638] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:36.333 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.333 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.333 { 00:23:36.333 "params": { 00:23:36.333 "name": "Nvme$subsystem", 00:23:36.333 "trtype": "$TEST_TRANSPORT", 00:23:36.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.333 "adrfam": "ipv4", 00:23:36.333 "trsvcid": "$NVMF_PORT", 00:23:36.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.333 "hdgst": ${hdgst:-false}, 00:23:36.333 "ddgst": ${ddgst:-false} 00:23:36.333 }, 00:23:36.333 "method": "bdev_nvme_attach_controller" 00:23:36.333 } 00:23:36.333 EOF 00:23:36.333 )") 00:23:36.333 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:36.333 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.333 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.333 { 00:23:36.333 "params": { 00:23:36.333 "name": "Nvme$subsystem", 00:23:36.333 "trtype": "$TEST_TRANSPORT", 00:23:36.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.333 "adrfam": "ipv4", 00:23:36.333 "trsvcid": "$NVMF_PORT", 00:23:36.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.333 "hdgst": ${hdgst:-false}, 00:23:36.333 "ddgst": ${ddgst:-false} 00:23:36.333 }, 00:23:36.333 "method": "bdev_nvme_attach_controller" 00:23:36.333 } 00:23:36.333 EOF 00:23:36.333 )") 00:23:36.333 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:36.333 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.333 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.333 { 00:23:36.333 "params": { 00:23:36.333 "name": "Nvme$subsystem", 00:23:36.333 "trtype": "$TEST_TRANSPORT", 00:23:36.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.333 "adrfam": "ipv4", 00:23:36.333 "trsvcid": "$NVMF_PORT", 00:23:36.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.333 "hdgst": ${hdgst:-false}, 00:23:36.333 "ddgst": ${ddgst:-false} 00:23:36.333 }, 00:23:36.333 "method": "bdev_nvme_attach_controller" 00:23:36.333 } 00:23:36.333 EOF 00:23:36.333 )") 00:23:36.333 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:36.333 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.333 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.333 { 00:23:36.333 "params": { 00:23:36.333 "name": "Nvme$subsystem", 00:23:36.333 "trtype": "$TEST_TRANSPORT", 00:23:36.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.333 "adrfam": "ipv4", 
00:23:36.333 "trsvcid": "$NVMF_PORT", 00:23:36.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.333 "hdgst": ${hdgst:-false}, 00:23:36.333 "ddgst": ${ddgst:-false} 00:23:36.333 }, 00:23:36.333 "method": "bdev_nvme_attach_controller" 00:23:36.333 } 00:23:36.333 EOF 00:23:36.333 )") 00:23:36.333 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:36.333 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:36.333 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:36.333 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:36.333 "params": { 00:23:36.333 "name": "Nvme1", 00:23:36.333 "trtype": "tcp", 00:23:36.333 "traddr": "10.0.0.2", 00:23:36.333 "adrfam": "ipv4", 00:23:36.333 "trsvcid": "4420", 00:23:36.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.333 "hdgst": false, 00:23:36.333 "ddgst": false 00:23:36.333 }, 00:23:36.333 "method": "bdev_nvme_attach_controller" 00:23:36.333 },{ 00:23:36.333 "params": { 00:23:36.333 "name": "Nvme2", 00:23:36.333 "trtype": "tcp", 00:23:36.333 "traddr": "10.0.0.2", 00:23:36.333 "adrfam": "ipv4", 00:23:36.333 "trsvcid": "4420", 00:23:36.333 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:36.333 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:36.333 "hdgst": false, 00:23:36.333 "ddgst": false 00:23:36.333 }, 00:23:36.333 "method": "bdev_nvme_attach_controller" 00:23:36.333 },{ 00:23:36.333 "params": { 00:23:36.333 "name": "Nvme3", 00:23:36.333 "trtype": "tcp", 00:23:36.333 "traddr": "10.0.0.2", 00:23:36.333 "adrfam": "ipv4", 00:23:36.333 "trsvcid": "4420", 00:23:36.333 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:36.333 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:36.333 "hdgst": false, 00:23:36.333 "ddgst": false 00:23:36.333 }, 00:23:36.333 "method": "bdev_nvme_attach_controller" 00:23:36.333 },{ 00:23:36.333 "params": { 00:23:36.333 "name": "Nvme4", 00:23:36.333 "trtype": "tcp", 00:23:36.333 "traddr": "10.0.0.2", 00:23:36.333 "adrfam": "ipv4", 00:23:36.333 "trsvcid": "4420", 00:23:36.333 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:36.333 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:36.333 "hdgst": false, 00:23:36.333 "ddgst": false 00:23:36.333 }, 00:23:36.333 "method": "bdev_nvme_attach_controller" 00:23:36.333 },{ 00:23:36.333 "params": { 00:23:36.333 "name": "Nvme5", 00:23:36.333 "trtype": "tcp", 00:23:36.333 "traddr": "10.0.0.2", 00:23:36.333 "adrfam": "ipv4", 00:23:36.333 "trsvcid": "4420", 00:23:36.333 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:36.333 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:36.333 "hdgst": false, 00:23:36.333 "ddgst": false 00:23:36.333 }, 00:23:36.333 "method": "bdev_nvme_attach_controller" 00:23:36.333 },{ 00:23:36.333 "params": { 00:23:36.333 "name": "Nvme6", 00:23:36.333 "trtype": "tcp", 00:23:36.333 "traddr": "10.0.0.2", 00:23:36.333 "adrfam": "ipv4", 00:23:36.333 "trsvcid": "4420", 00:23:36.333 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:36.333 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:36.333 "hdgst": false, 00:23:36.333 "ddgst": false 00:23:36.333 }, 00:23:36.333 "method": "bdev_nvme_attach_controller" 00:23:36.333 },{ 00:23:36.333 "params": { 00:23:36.333 "name": "Nvme7", 00:23:36.333 "trtype": "tcp", 00:23:36.333 "traddr": "10.0.0.2", 00:23:36.333 
"adrfam": "ipv4", 00:23:36.333 "trsvcid": "4420", 00:23:36.333 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:36.333 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:36.333 "hdgst": false, 00:23:36.333 "ddgst": false 00:23:36.333 }, 00:23:36.333 "method": "bdev_nvme_attach_controller" 00:23:36.333 },{ 00:23:36.333 "params": { 00:23:36.333 "name": "Nvme8", 00:23:36.333 "trtype": "tcp", 00:23:36.333 "traddr": "10.0.0.2", 00:23:36.333 "adrfam": "ipv4", 00:23:36.333 "trsvcid": "4420", 00:23:36.333 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:36.333 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:36.333 "hdgst": false, 00:23:36.333 "ddgst": false 00:23:36.333 }, 00:23:36.333 "method": "bdev_nvme_attach_controller" 00:23:36.333 },{ 00:23:36.333 "params": { 00:23:36.333 "name": "Nvme9", 00:23:36.333 "trtype": "tcp", 00:23:36.333 "traddr": "10.0.0.2", 00:23:36.333 "adrfam": "ipv4", 00:23:36.333 "trsvcid": "4420", 00:23:36.333 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:36.333 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:36.333 "hdgst": false, 00:23:36.333 "ddgst": false 00:23:36.333 }, 00:23:36.333 "method": "bdev_nvme_attach_controller" 00:23:36.333 },{ 00:23:36.333 "params": { 00:23:36.333 "name": "Nvme10", 00:23:36.333 "trtype": "tcp", 00:23:36.333 "traddr": "10.0.0.2", 00:23:36.333 "adrfam": "ipv4", 00:23:36.333 "trsvcid": "4420", 00:23:36.333 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:36.333 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:36.333 "hdgst": false, 00:23:36.333 "ddgst": false 00:23:36.333 }, 00:23:36.333 "method": "bdev_nvme_attach_controller" 00:23:36.333 }' 00:23:36.595 [2024-11-20 09:57:07.282263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.595 [2024-11-20 09:57:07.336015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.983 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.983 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:37.983 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:37.983 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.983 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:37.983 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.983 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1438262 00:23:37.983 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:37.983 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:38.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1438262 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:38.925 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1437884 00:23:38.925 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:38.925 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:38.925 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:38.925 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:38.925 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.925 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.925 { 00:23:38.925 "params": { 00:23:38.925 "name": "Nvme$subsystem", 00:23:38.925 "trtype": "$TEST_TRANSPORT", 00:23:38.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.925 "adrfam": "ipv4", 00:23:38.925 "trsvcid": "$NVMF_PORT", 00:23:38.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.925 "hdgst": ${hdgst:-false}, 00:23:38.925 "ddgst": ${ddgst:-false} 00:23:38.925 }, 00:23:38.925 "method": "bdev_nvme_attach_controller" 00:23:38.925 } 00:23:38.925 EOF 00:23:38.925 )") 00:23:38.925 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.925 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.925 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.925 { 00:23:38.925 "params": { 00:23:38.925 "name": "Nvme$subsystem", 00:23:38.925 "trtype": "$TEST_TRANSPORT", 00:23:38.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.925 "adrfam": "ipv4", 00:23:38.925 "trsvcid": "$NVMF_PORT", 00:23:38.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.925 "hdgst": ${hdgst:-false}, 00:23:38.925 "ddgst": ${ddgst:-false} 00:23:38.925 }, 00:23:38.925 "method": "bdev_nvme_attach_controller" 00:23:38.925 } 00:23:38.925 EOF 00:23:38.925 )") 00:23:38.925 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.925 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.925 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.925 { 00:23:38.925 "params": { 00:23:38.925 "name": "Nvme$subsystem", 00:23:38.925 "trtype": "$TEST_TRANSPORT", 00:23:38.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.925 "adrfam": "ipv4", 00:23:38.925 "trsvcid": "$NVMF_PORT", 00:23:38.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.925 "hdgst": ${hdgst:-false}, 00:23:38.926 "ddgst": ${ddgst:-false} 00:23:38.926 }, 00:23:38.926 "method": "bdev_nvme_attach_controller" 00:23:38.926 } 00:23:38.926 EOF 00:23:38.926 )") 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.926 { 00:23:38.926 "params": { 00:23:38.926 "name": "Nvme$subsystem", 00:23:38.926 "trtype": "$TEST_TRANSPORT", 00:23:38.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.926 "adrfam": "ipv4", 00:23:38.926 "trsvcid": "$NVMF_PORT", 00:23:38.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.926 "hdgst": ${hdgst:-false}, 00:23:38.926 "ddgst": ${ddgst:-false} 00:23:38.926 }, 00:23:38.926 "method": "bdev_nvme_attach_controller" 00:23:38.926 } 00:23:38.926 EOF 00:23:38.926 )") 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.926 { 00:23:38.926 "params": { 00:23:38.926 "name": "Nvme$subsystem", 00:23:38.926 "trtype": "$TEST_TRANSPORT", 00:23:38.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.926 "adrfam": "ipv4", 00:23:38.926 "trsvcid": "$NVMF_PORT", 00:23:38.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.926 "hdgst": ${hdgst:-false}, 00:23:38.926 "ddgst": ${ddgst:-false} 00:23:38.926 }, 00:23:38.926 "method": "bdev_nvme_attach_controller" 00:23:38.926 } 00:23:38.926 EOF 00:23:38.926 )") 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.926 { 00:23:38.926 "params": { 00:23:38.926 "name": "Nvme$subsystem", 00:23:38.926 "trtype": "$TEST_TRANSPORT", 00:23:38.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.926 "adrfam": "ipv4", 00:23:38.926 "trsvcid": "$NVMF_PORT", 00:23:38.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.926 "hdgst": ${hdgst:-false}, 00:23:38.926 "ddgst": ${ddgst:-false} 00:23:38.926 }, 00:23:38.926 "method": "bdev_nvme_attach_controller" 00:23:38.926 } 00:23:38.926 EOF 00:23:38.926 )") 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.926 [2024-11-20 09:57:09.651384] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:23:38.926 [2024-11-20 09:57:09.651437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1438726 ] 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.926 { 00:23:38.926 "params": { 00:23:38.926 "name": "Nvme$subsystem", 00:23:38.926 "trtype": "$TEST_TRANSPORT", 00:23:38.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.926 "adrfam": "ipv4", 00:23:38.926 "trsvcid": "$NVMF_PORT", 00:23:38.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.926 "hdgst": ${hdgst:-false}, 00:23:38.926 "ddgst": ${ddgst:-false} 00:23:38.926 }, 00:23:38.926 "method": "bdev_nvme_attach_controller" 00:23:38.926 } 00:23:38.926 EOF 00:23:38.926 )") 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.926 { 00:23:38.926 "params": { 00:23:38.926 "name": "Nvme$subsystem", 00:23:38.926 "trtype": "$TEST_TRANSPORT", 00:23:38.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.926 "adrfam": "ipv4", 00:23:38.926 "trsvcid": "$NVMF_PORT", 00:23:38.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.926 "hdgst": ${hdgst:-false}, 00:23:38.926 "ddgst": ${ddgst:-false} 00:23:38.926 }, 00:23:38.926 "method": "bdev_nvme_attach_controller" 00:23:38.926 } 00:23:38.926 EOF 00:23:38.926 )") 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.926 { 00:23:38.926 "params": { 00:23:38.926 "name": "Nvme$subsystem", 00:23:38.926 "trtype": "$TEST_TRANSPORT", 00:23:38.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.926 "adrfam": "ipv4", 00:23:38.926 "trsvcid": "$NVMF_PORT", 00:23:38.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.926 "hdgst": ${hdgst:-false}, 00:23:38.926 "ddgst": ${ddgst:-false} 00:23:38.926 }, 00:23:38.926 "method": "bdev_nvme_attach_controller" 00:23:38.926 } 00:23:38.926 EOF 00:23:38.926 )") 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.926 { 00:23:38.926 "params": { 00:23:38.926 "name": "Nvme$subsystem", 00:23:38.926 "trtype": "$TEST_TRANSPORT", 00:23:38.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.926 "adrfam": "ipv4", 00:23:38.926 "trsvcid": "$NVMF_PORT", 00:23:38.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.926 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.926 "hdgst": ${hdgst:-false}, 00:23:38.926 "ddgst": ${ddgst:-false} 00:23:38.926 }, 00:23:38.926 "method": "bdev_nvme_attach_controller" 00:23:38.926 } 00:23:38.926 EOF 00:23:38.926 )") 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:38.926 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:38.926 "params": { 00:23:38.926 "name": "Nvme1", 00:23:38.926 "trtype": "tcp", 00:23:38.926 "traddr": "10.0.0.2", 00:23:38.926 "adrfam": "ipv4", 00:23:38.926 "trsvcid": "4420", 00:23:38.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:38.926 "hdgst": false, 00:23:38.926 "ddgst": false 00:23:38.926 }, 00:23:38.926 "method": "bdev_nvme_attach_controller" 00:23:38.926 },{ 00:23:38.926 "params": { 00:23:38.926 "name": "Nvme2", 00:23:38.926 "trtype": "tcp", 00:23:38.926 "traddr": "10.0.0.2", 00:23:38.926 "adrfam": "ipv4", 00:23:38.926 "trsvcid": "4420", 00:23:38.926 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:38.926 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:38.926 "hdgst": false, 00:23:38.926 "ddgst": false 00:23:38.926 }, 00:23:38.926 "method": "bdev_nvme_attach_controller" 00:23:38.926 },{ 00:23:38.926 "params": { 00:23:38.926 "name": "Nvme3", 00:23:38.926 "trtype": "tcp", 00:23:38.926 "traddr": "10.0.0.2", 00:23:38.926 "adrfam": "ipv4", 00:23:38.926 "trsvcid": "4420", 00:23:38.926 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:38.926 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:38.926 "hdgst": false, 00:23:38.926 "ddgst": false 00:23:38.926 }, 00:23:38.926 "method": "bdev_nvme_attach_controller" 00:23:38.926 },{ 00:23:38.926 "params": { 00:23:38.926 "name": "Nvme4", 00:23:38.926 "trtype": "tcp", 00:23:38.926 "traddr": "10.0.0.2", 00:23:38.926 "adrfam": "ipv4", 00:23:38.926 "trsvcid": "4420", 00:23:38.926 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:38.926 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:38.926 "hdgst": false, 00:23:38.926 "ddgst": false 00:23:38.926 }, 00:23:38.926 "method": "bdev_nvme_attach_controller" 00:23:38.926 },{ 00:23:38.926 "params": { 00:23:38.926 "name": "Nvme5", 00:23:38.926 "trtype": "tcp", 00:23:38.926 "traddr": "10.0.0.2", 00:23:38.926 "adrfam": "ipv4", 00:23:38.926 "trsvcid": "4420", 00:23:38.926 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:38.926 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:38.926 "hdgst": false, 00:23:38.926 "ddgst": false 00:23:38.926 }, 00:23:38.926 "method": "bdev_nvme_attach_controller" 00:23:38.926 },{ 00:23:38.927 "params": { 00:23:38.927 "name": "Nvme6", 00:23:38.927 "trtype": "tcp", 00:23:38.927 "traddr": "10.0.0.2", 00:23:38.927 "adrfam": "ipv4", 00:23:38.927 "trsvcid": "4420", 00:23:38.927 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:38.927 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:38.927 "hdgst": false, 00:23:38.927 "ddgst": false 00:23:38.927 }, 00:23:38.927 "method": "bdev_nvme_attach_controller" 00:23:38.927 },{ 00:23:38.927 "params": { 00:23:38.927 "name": "Nvme7", 00:23:38.927 "trtype": "tcp", 00:23:38.927 "traddr": "10.0.0.2", 00:23:38.927 "adrfam": "ipv4", 00:23:38.927 "trsvcid": "4420", 00:23:38.927 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:38.927 
"hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:38.927 "hdgst": false, 00:23:38.927 "ddgst": false 00:23:38.927 }, 00:23:38.927 "method": "bdev_nvme_attach_controller" 00:23:38.927 },{ 00:23:38.927 "params": { 00:23:38.927 "name": "Nvme8", 00:23:38.927 "trtype": "tcp", 00:23:38.927 "traddr": "10.0.0.2", 00:23:38.927 "adrfam": "ipv4", 00:23:38.927 "trsvcid": "4420", 00:23:38.927 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:38.927 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:38.927 "hdgst": false, 00:23:38.927 "ddgst": false 00:23:38.927 }, 00:23:38.927 "method": "bdev_nvme_attach_controller" 00:23:38.927 },{ 00:23:38.927 "params": { 00:23:38.927 "name": "Nvme9", 00:23:38.927 "trtype": "tcp", 00:23:38.927 "traddr": "10.0.0.2", 00:23:38.927 "adrfam": "ipv4", 00:23:38.927 "trsvcid": "4420", 00:23:38.927 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:38.927 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:38.927 "hdgst": false, 00:23:38.927 "ddgst": false 00:23:38.927 }, 00:23:38.927 "method": "bdev_nvme_attach_controller" 00:23:38.927 },{ 00:23:38.927 "params": { 00:23:38.927 "name": "Nvme10", 00:23:38.927 "trtype": "tcp", 00:23:38.927 "traddr": "10.0.0.2", 00:23:38.927 "adrfam": "ipv4", 00:23:38.927 "trsvcid": "4420", 00:23:38.927 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:38.927 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:38.927 "hdgst": false, 00:23:38.927 "ddgst": false 00:23:38.927 }, 00:23:38.927 "method": "bdev_nvme_attach_controller" 00:23:38.927 }' 00:23:38.927 [2024-11-20 09:57:09.743571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.927 [2024-11-20 09:57:09.779278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.309 Running I/O for 1 seconds... 00:23:41.248 1865.00 IOPS, 116.56 MiB/s 00:23:41.248 Latency(us) 00:23:41.248 [2024-11-20T08:57:12.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.248 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:41.248 Verification LBA range: start 0x0 length 0x400 00:23:41.249 Nvme1n1 : 1.16 219.94 13.75 0.00 0.00 288120.11 21408.43 248162.99 00:23:41.249 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:41.249 Verification LBA range: start 0x0 length 0x400 00:23:41.249 Nvme2n1 : 1.13 226.76 14.17 0.00 0.00 274598.83 17803.95 251658.24 00:23:41.249 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:41.249 Verification LBA range: start 0x0 length 0x400 00:23:41.249 Nvme3n1 : 1.10 236.32 14.77 0.00 0.00 256322.90 4232.53 262144.00 00:23:41.249 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:41.249 Verification LBA range: start 0x0 length 0x400 00:23:41.249 Nvme4n1 : 1.10 233.57 14.60 0.00 0.00 256522.88 12124.16 255153.49 00:23:41.249 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:41.249 Verification LBA range: start 0x0 length 0x400 00:23:41.249 Nvme5n1 : 1.13 225.86 14.12 0.00 0.00 261183.36 22282.24 246415.36 00:23:41.249 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:41.249 Verification LBA range: start 0x0 length 0x400 00:23:41.249 Nvme6n1 : 1.14 225.46 14.09 0.00 0.00 256685.01 19223.89 246415.36 00:23:41.249 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:41.249 Verification LBA range: start 0x0 length 0x400 00:23:41.249 Nvme7n1 : 1.17 274.14 17.13 0.00 0.00 207718.83 11250.35 269134.51 00:23:41.249 Job: Nvme8n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:23:41.249 Verification LBA range: start 0x0 length 0x400 00:23:41.249 Nvme8n1 : 1.18 271.02 16.94 0.00 0.00 206759.08 15728.64 249910.61 00:23:41.249 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:41.249 Verification LBA range: start 0x0 length 0x400 00:23:41.249 Nvme9n1 : 1.17 218.64 13.66 0.00 0.00 251009.49 21954.56 286610.77 00:23:41.249 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:41.249 Verification LBA range: start 0x0 length 0x400 00:23:41.249 Nvme10n1 : 1.18 273.59 17.10 0.00 0.00 197086.90 894.29 263891.63 00:23:41.249 [2024-11-20T08:57:12.165Z] =================================================================================================================== 00:23:41.249 [2024-11-20T08:57:12.165Z] Total : 2405.30 150.33 0.00 0.00 242641.91 894.29 286610.77 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:41.508 rmmod nvme_tcp 00:23:41.508 rmmod nvme_fabrics 00:23:41.508 rmmod nvme_keyring 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1437884 ']' 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1437884 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1437884 ']' 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1437884 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:23:41.508 09:57:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1437884 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1437884' 00:23:41.508 killing process with pid 1437884 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1437884 00:23:41.508 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1437884 00:23:41.768 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:41.768 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:41.768 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:41.768 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:41.768 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:23:41.768 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:41.768 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:23:41.768 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:41.768 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:41.768 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.768 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.768 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:44.306 00:23:44.306 real 0m16.738s 00:23:44.306 user 0m33.040s 00:23:44.306 sys 0m7.002s 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:44.306 ************************************ 00:23:44.306 END TEST nvmf_shutdown_tc1 00:23:44.306 ************************************ 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:44.306 09:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:44.306 ************************************ 00:23:44.306 START TEST nvmf_shutdown_tc2 00:23:44.306 ************************************ 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:44.306 09:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 
- 0x159b)' 00:23:44.306 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:44.306 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:44.306 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:44.307 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.307 
09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:44.307 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- 
# ip -4 addr flush cvl_0_1 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:44.307 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:44.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:23:44.307 00:23:44.307 --- 10.0.0.2 ping statistics --- 00:23:44.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.307 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:44.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:23:44.307 00:23:44.307 --- 10.0.0.1 ping statistics --- 00:23:44.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.307 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1440338 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1440338 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1440338 ']' 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
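The nvmftestinit trace above boils down to a short netns recipe: the first port (cvl_0_0) moves into a private namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens the NVMe/TCP listening port, and one ping in each direction proves the path before the target is launched. A minimal standalone sketch; every interface name, address, and rule is taken from the trace, only the grouping and comments are editorial:

    # target side lives in its own network namespace, initiator side in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-facing port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP (TCP port 4420) on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check both directions before starting nvmf_tgt in the namespace
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Both pings come back with sub-millisecond round trips (0.682 ms and 0.290 ms), so the target that nvmfappstart launches next, nvmf_tgt -i 0 -e 0xFFFF -m 0x1E inside cvl_0_0_ns_spdk, will be reachable from the root namespace.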
00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.307 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:44.566 [2024-11-20 09:57:15.219240] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:23:44.566 [2024-11-20 09:57:15.219304] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.566 [2024-11-20 09:57:15.316650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:44.566 [2024-11-20 09:57:15.350649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.566 [2024-11-20 09:57:15.350680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.566 [2024-11-20 09:57:15.350686] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.566 [2024-11-20 09:57:15.350690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.566 [2024-11-20 09:57:15.350694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.566 [2024-11-20 09:57:15.352203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.566 [2024-11-20 09:57:15.352351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:44.566 [2024-11-20 09:57:15.352462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.566 [2024-11-20 09:57:15.352463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:45.137 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.137 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:45.137 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:45.137 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:45.137 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:45.398 [2024-11-20 09:57:16.070896] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:45.398 09:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.398 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:45.398 Malloc1 
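The Malloc1 line just above, and Malloc2 through Malloc10 below, are the target's replies to the create_subsystems step being traced here: each of the ten iterations cats one block of RPCs into rpcs.txt, and the single bare rpc_cmd at shutdown.sh@36 then evidently replays the whole file against /var/tmp/spdk.sock in one shot. A sketch of the pattern; the Malloc geometry (64 MiB, 512-byte blocks), the SPDK$i serial numbers, and the exact RPC argument spellings are not visible in this trace, so treat those as illustrative (echo stands in for the heredoc cat seen in the xtrace):

    # queue one batch of RPCs per subsystem, then replay the file in one rpc.py call
    rm -f rpcs.txt
    for i in {1..10}; do
        {
            echo "bdev_malloc_create -b Malloc$i 64 512"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done
    scripts/rpc.py -s /var/tmp/spdk.sock < rpcs.txt    # one process, ten subsystems

Batching matters here: spawning rpc.py once per RPC would add noticeable wall-clock time to a step that issues some forty commands, and rpc.py can consume a newline-separated command list on stdin for exactly this reason.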
00:23:45.398 [2024-11-20 09:57:16.189862] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.398 Malloc2 00:23:45.398 Malloc3 00:23:45.398 Malloc4 00:23:45.660 Malloc5 00:23:45.660 Malloc6 00:23:45.660 Malloc7 00:23:45.660 Malloc8 00:23:45.660 Malloc9 00:23:45.660 Malloc10 00:23:45.660 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.660 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:45.660 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:45.660 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1440592 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1440592 /var/tmp/bdevperf.sock 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1440592 ']' 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:45.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
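The bdevperf run that follows gets its whole controller table as a JSON config on --json /dev/fd/63; that descriptor is a bash process substitution fed by gen_nvmf_target_json, whose xtrace (the for-subsystem/config+=/cat/jq/IFS/printf lines) fills much of this section. A condensed sketch of the helper; the printf-based fragment construction and the subsystems/bdev wrapper below are stand-ins for the heredoc and wrapper details of the real nvmf/common.sh:

    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            # one bdev_nvme_attach_controller entry per requested subsystem id
            config+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "%s", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": %s, "ddgst": %s}, "method": "bdev_nvme_attach_controller"}' \
                "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" \
                "$subsystem" "$subsystem" "${hdgst:-false}" "${ddgst:-false}")")
        done
        local IFS=,                 # "${config[*]}" joins the fragments with commas
        printf '{"subsystems": [{"subsystem": "bdev", "config": [%s]}]}' "${config[*]}" | jq .
    }

    # tc2's invocation; the <(...) substitution is why the trace shows --json /dev/fd/63
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) -q 64 -o 65536 -w verify -t 10

The printf at nvmf/common.sh@586, visible below with all ten parameter blocks expanded (Nvme1 through Nvme10, each pointing at 10.0.0.2:4420 over tcp), is that comma-join step, and the jq . pass both pretty-prints and validates the assembled document.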
00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.923 { 00:23:45.923 "params": { 00:23:45.923 "name": "Nvme$subsystem", 00:23:45.923 "trtype": "$TEST_TRANSPORT", 00:23:45.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.923 "adrfam": "ipv4", 00:23:45.923 "trsvcid": "$NVMF_PORT", 00:23:45.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.923 "hdgst": ${hdgst:-false}, 00:23:45.923 "ddgst": ${ddgst:-false} 00:23:45.923 }, 00:23:45.923 "method": "bdev_nvme_attach_controller" 00:23:45.923 } 00:23:45.923 EOF 00:23:45.923 )") 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.923 { 00:23:45.923 "params": { 00:23:45.923 "name": "Nvme$subsystem", 00:23:45.923 "trtype": "$TEST_TRANSPORT", 00:23:45.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.923 "adrfam": "ipv4", 00:23:45.923 "trsvcid": "$NVMF_PORT", 00:23:45.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.923 "hdgst": ${hdgst:-false}, 00:23:45.923 "ddgst": ${ddgst:-false} 00:23:45.923 }, 00:23:45.923 "method": "bdev_nvme_attach_controller" 00:23:45.923 } 00:23:45.923 EOF 00:23:45.923 )") 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.923 { 00:23:45.923 "params": { 00:23:45.923 "name": "Nvme$subsystem", 00:23:45.923 "trtype": "$TEST_TRANSPORT", 00:23:45.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.923 "adrfam": "ipv4", 00:23:45.923 "trsvcid": "$NVMF_PORT", 00:23:45.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.923 "hdgst": ${hdgst:-false}, 00:23:45.923 "ddgst": ${ddgst:-false} 00:23:45.923 }, 00:23:45.923 "method": 
"bdev_nvme_attach_controller" 00:23:45.923 } 00:23:45.923 EOF 00:23:45.923 )") 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.923 { 00:23:45.923 "params": { 00:23:45.923 "name": "Nvme$subsystem", 00:23:45.923 "trtype": "$TEST_TRANSPORT", 00:23:45.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.923 "adrfam": "ipv4", 00:23:45.923 "trsvcid": "$NVMF_PORT", 00:23:45.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.923 "hdgst": ${hdgst:-false}, 00:23:45.923 "ddgst": ${ddgst:-false} 00:23:45.923 }, 00:23:45.923 "method": "bdev_nvme_attach_controller" 00:23:45.923 } 00:23:45.923 EOF 00:23:45.923 )") 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.923 { 00:23:45.923 "params": { 00:23:45.923 "name": "Nvme$subsystem", 00:23:45.923 "trtype": "$TEST_TRANSPORT", 00:23:45.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.923 "adrfam": "ipv4", 00:23:45.923 "trsvcid": "$NVMF_PORT", 00:23:45.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.923 "hdgst": ${hdgst:-false}, 00:23:45.923 "ddgst": ${ddgst:-false} 00:23:45.923 }, 00:23:45.923 "method": "bdev_nvme_attach_controller" 00:23:45.923 } 00:23:45.923 EOF 00:23:45.923 )") 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.923 { 00:23:45.923 "params": { 00:23:45.923 "name": "Nvme$subsystem", 00:23:45.923 "trtype": "$TEST_TRANSPORT", 00:23:45.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.923 "adrfam": "ipv4", 00:23:45.923 "trsvcid": "$NVMF_PORT", 00:23:45.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.923 "hdgst": ${hdgst:-false}, 00:23:45.923 "ddgst": ${ddgst:-false} 00:23:45.923 }, 00:23:45.923 "method": "bdev_nvme_attach_controller" 00:23:45.923 } 00:23:45.923 EOF 00:23:45.923 )") 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.923 [2024-11-20 09:57:16.634702] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:23:45.923 [2024-11-20 09:57:16.634755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1440592 ] 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.923 { 00:23:45.923 "params": { 00:23:45.923 "name": "Nvme$subsystem", 00:23:45.923 "trtype": "$TEST_TRANSPORT", 00:23:45.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.923 "adrfam": "ipv4", 00:23:45.923 "trsvcid": "$NVMF_PORT", 00:23:45.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.923 "hdgst": ${hdgst:-false}, 00:23:45.923 "ddgst": ${ddgst:-false} 00:23:45.923 }, 00:23:45.923 "method": "bdev_nvme_attach_controller" 00:23:45.923 } 00:23:45.923 EOF 00:23:45.923 )") 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.923 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.923 { 00:23:45.923 "params": { 00:23:45.923 "name": "Nvme$subsystem", 00:23:45.924 "trtype": "$TEST_TRANSPORT", 00:23:45.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.924 "adrfam": "ipv4", 00:23:45.924 "trsvcid": "$NVMF_PORT", 00:23:45.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.924 "hdgst": ${hdgst:-false}, 00:23:45.924 "ddgst": ${ddgst:-false} 00:23:45.924 }, 00:23:45.924 "method": "bdev_nvme_attach_controller" 00:23:45.924 } 00:23:45.924 EOF 00:23:45.924 )") 00:23:45.924 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:45.924 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.924 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.924 { 00:23:45.924 "params": { 00:23:45.924 "name": "Nvme$subsystem", 00:23:45.924 "trtype": "$TEST_TRANSPORT", 00:23:45.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.924 "adrfam": "ipv4", 00:23:45.924 "trsvcid": "$NVMF_PORT", 00:23:45.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.924 "hdgst": ${hdgst:-false}, 00:23:45.924 "ddgst": ${ddgst:-false} 00:23:45.924 }, 00:23:45.924 "method": "bdev_nvme_attach_controller" 00:23:45.924 } 00:23:45.924 EOF 00:23:45.924 )") 00:23:45.924 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:45.924 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.924 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.924 { 00:23:45.924 "params": { 00:23:45.924 "name": "Nvme$subsystem", 00:23:45.924 "trtype": "$TEST_TRANSPORT", 00:23:45.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.924 "adrfam": "ipv4", 00:23:45.924 "trsvcid": "$NVMF_PORT", 00:23:45.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.924 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.924 "hdgst": ${hdgst:-false}, 00:23:45.924 "ddgst": ${ddgst:-false} 00:23:45.924 }, 00:23:45.924 "method": "bdev_nvme_attach_controller" 00:23:45.924 } 00:23:45.924 EOF 00:23:45.924 )") 00:23:45.924 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:45.924 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:23:45.924 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:45.924 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:45.924 "params": { 00:23:45.924 "name": "Nvme1", 00:23:45.924 "trtype": "tcp", 00:23:45.924 "traddr": "10.0.0.2", 00:23:45.924 "adrfam": "ipv4", 00:23:45.924 "trsvcid": "4420", 00:23:45.924 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.924 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:45.924 "hdgst": false, 00:23:45.924 "ddgst": false 00:23:45.924 }, 00:23:45.924 "method": "bdev_nvme_attach_controller" 00:23:45.924 },{ 00:23:45.924 "params": { 00:23:45.924 "name": "Nvme2", 00:23:45.924 "trtype": "tcp", 00:23:45.924 "traddr": "10.0.0.2", 00:23:45.924 "adrfam": "ipv4", 00:23:45.924 "trsvcid": "4420", 00:23:45.924 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:45.924 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:45.924 "hdgst": false, 00:23:45.924 "ddgst": false 00:23:45.924 }, 00:23:45.924 "method": "bdev_nvme_attach_controller" 00:23:45.924 },{ 00:23:45.924 "params": { 00:23:45.924 "name": "Nvme3", 00:23:45.924 "trtype": "tcp", 00:23:45.924 "traddr": "10.0.0.2", 00:23:45.924 "adrfam": "ipv4", 00:23:45.924 "trsvcid": "4420", 00:23:45.924 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:45.924 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:45.924 "hdgst": false, 00:23:45.924 "ddgst": false 00:23:45.924 }, 00:23:45.924 "method": "bdev_nvme_attach_controller" 00:23:45.924 },{ 00:23:45.924 "params": { 00:23:45.924 "name": "Nvme4", 00:23:45.924 "trtype": "tcp", 00:23:45.924 "traddr": "10.0.0.2", 00:23:45.924 "adrfam": "ipv4", 00:23:45.924 "trsvcid": "4420", 00:23:45.924 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:45.924 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:45.924 "hdgst": false, 00:23:45.924 "ddgst": false 00:23:45.924 }, 00:23:45.924 "method": "bdev_nvme_attach_controller" 00:23:45.924 },{ 00:23:45.924 "params": { 00:23:45.924 "name": "Nvme5", 00:23:45.924 "trtype": "tcp", 00:23:45.924 "traddr": "10.0.0.2", 00:23:45.924 "adrfam": "ipv4", 00:23:45.924 "trsvcid": "4420", 00:23:45.924 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:45.924 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:45.924 "hdgst": false, 00:23:45.924 "ddgst": false 00:23:45.924 }, 00:23:45.924 "method": "bdev_nvme_attach_controller" 00:23:45.924 },{ 00:23:45.924 "params": { 00:23:45.924 "name": "Nvme6", 00:23:45.924 "trtype": "tcp", 00:23:45.924 "traddr": "10.0.0.2", 00:23:45.924 "adrfam": "ipv4", 00:23:45.924 "trsvcid": "4420", 00:23:45.924 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:45.924 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:45.924 "hdgst": false, 00:23:45.924 "ddgst": false 00:23:45.924 }, 00:23:45.924 "method": "bdev_nvme_attach_controller" 00:23:45.924 },{ 00:23:45.924 "params": { 00:23:45.924 "name": "Nvme7", 00:23:45.924 "trtype": "tcp", 00:23:45.924 "traddr": "10.0.0.2", 00:23:45.924 "adrfam": "ipv4", 00:23:45.924 "trsvcid": "4420", 00:23:45.924 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:45.924 
"hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:45.924 "hdgst": false, 00:23:45.924 "ddgst": false 00:23:45.924 }, 00:23:45.924 "method": "bdev_nvme_attach_controller" 00:23:45.924 },{ 00:23:45.924 "params": { 00:23:45.924 "name": "Nvme8", 00:23:45.924 "trtype": "tcp", 00:23:45.924 "traddr": "10.0.0.2", 00:23:45.924 "adrfam": "ipv4", 00:23:45.924 "trsvcid": "4420", 00:23:45.924 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:45.924 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:45.924 "hdgst": false, 00:23:45.924 "ddgst": false 00:23:45.924 }, 00:23:45.924 "method": "bdev_nvme_attach_controller" 00:23:45.924 },{ 00:23:45.924 "params": { 00:23:45.924 "name": "Nvme9", 00:23:45.924 "trtype": "tcp", 00:23:45.924 "traddr": "10.0.0.2", 00:23:45.924 "adrfam": "ipv4", 00:23:45.924 "trsvcid": "4420", 00:23:45.924 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:45.924 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:45.924 "hdgst": false, 00:23:45.924 "ddgst": false 00:23:45.924 }, 00:23:45.924 "method": "bdev_nvme_attach_controller" 00:23:45.924 },{ 00:23:45.924 "params": { 00:23:45.924 "name": "Nvme10", 00:23:45.924 "trtype": "tcp", 00:23:45.924 "traddr": "10.0.0.2", 00:23:45.924 "adrfam": "ipv4", 00:23:45.924 "trsvcid": "4420", 00:23:45.924 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:45.924 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:45.924 "hdgst": false, 00:23:45.924 "ddgst": false 00:23:45.924 }, 00:23:45.924 "method": "bdev_nvme_attach_controller" 00:23:45.924 }' 00:23:45.924 [2024-11-20 09:57:16.724195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.924 [2024-11-20 09:57:16.760567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.379 Running I/O for 10 seconds... 00:23:47.379 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:47.379 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:47.379 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:47.379 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.379 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:47.379 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.379 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:47.379 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:47.379 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:47.379 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:47.379 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:47.379 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:47.379 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:47.379 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:47.379 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:47.379 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.379 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:47.379 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.641 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:47.641 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:47.641 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:47.902 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:47.902 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:47.902 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:47.902 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:47.902 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.902 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:47.902 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.902 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:47.902 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:47.902 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=135 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 135 -ge 100 ']' 00:23:48.164 
09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1440592 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1440592 ']' 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1440592 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1440592 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1440592' 00:23:48.164 killing process with pid 1440592 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1440592 00:23:48.164 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1440592 00:23:48.164 Received shutdown signal, test time was about 0.981838 seconds 00:23:48.164 00:23:48.164 Latency(us) 00:23:48.164 [2024-11-20T08:57:19.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.164 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:48.164 Verification LBA range: start 0x0 length 0x400 00:23:48.164 Nvme1n1 : 0.98 260.97 16.31 0.00 0.00 241749.33 4041.39 244667.73 00:23:48.164 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:48.164 Verification LBA range: start 0x0 length 0x400 00:23:48.164 Nvme2n1 : 0.97 264.06 16.50 0.00 0.00 234509.44 19442.35 223696.21 00:23:48.164 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:48.164 Verification LBA range: start 0x0 length 0x400 00:23:48.165 Nvme3n1 : 0.94 203.56 12.72 0.00 0.00 297759.29 18350.08 248162.99 00:23:48.165 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:48.165 Verification LBA range: start 0x0 length 0x400 00:23:48.165 Nvme4n1 : 0.96 266.51 16.66 0.00 0.00 222788.48 12178.77 255153.49 00:23:48.165 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:48.165 Verification LBA range: start 0x0 length 0x400 00:23:48.165 Nvme5n1 : 0.98 261.96 16.37 0.00 0.00 222103.25 15400.96 253405.87 00:23:48.165 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:48.165 Verification LBA range: start 0x0 length 0x400 00:23:48.165 Nvme6n1 : 0.95 203.06 12.69 0.00 0.00 279017.81 15947.09 253405.87 00:23:48.165 Job: Nvme7n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:23:48.165 Verification LBA range: start 0x0 length 0x400 00:23:48.165 Nvme7n1 : 0.97 267.90 16.74 0.00 0.00 206527.57 4041.39 248162.99 00:23:48.165 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:48.165 Verification LBA range: start 0x0 length 0x400 00:23:48.165 Nvme8n1 : 0.97 262.91 16.43 0.00 0.00 206692.69 19333.12 251658.24 00:23:48.165 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:48.165 Verification LBA range: start 0x0 length 0x400 00:23:48.165 Nvme9n1 : 0.96 199.63 12.48 0.00 0.00 263895.32 15510.19 256901.12 00:23:48.165 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:48.165 Verification LBA range: start 0x0 length 0x400 00:23:48.165 Nvme10n1 : 0.96 199.19 12.45 0.00 0.00 259674.45 20316.16 270882.13 00:23:48.165 [2024-11-20T08:57:19.081Z] =================================================================================================================== 00:23:48.165 [2024-11-20T08:57:19.081Z] Total : 2389.74 149.36 0.00 0.00 239901.05 4041.39 270882.13 00:23:48.427 09:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:23:49.366 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1440338 00:23:49.366 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:23:49.366 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:49.366 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:49.366 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:49.366 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:49.366 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:49.366 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:23:49.366 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:49.366 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:49.366 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:49.366 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:49.366 rmmod nvme_tcp 00:23:49.366 rmmod nvme_fabrics 00:23:49.366 rmmod nvme_keyring 00:23:49.366 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:49.366 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:49.366 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:49.366 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1440338 ']' 00:23:49.366 09:57:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1440338 00:23:49.366 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1440338 ']' 00:23:49.366 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1440338 00:23:49.366 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:49.626 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.626 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1440338 00:23:49.626 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:49.626 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:49.626 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1440338' 00:23:49.626 killing process with pid 1440338 00:23:49.626 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1440338 00:23:49.626 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1440338 00:23:49.887 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:49.887 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:49.887 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:49.887 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:49.887 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:49.887 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:49.887 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:49.887 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:49.887 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:49.887 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.887 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.887 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.801 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:51.801 00:23:51.801 real 0m7.857s 00:23:51.801 user 0m23.628s 00:23:51.801 sys 0m1.308s 00:23:51.801 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:51.801 09:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.801 ************************************ 00:23:51.801 END TEST nvmf_shutdown_tc2 00:23:51.801 ************************************ 00:23:51.801 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:51.801 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:51.801 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:51.801 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:52.063 ************************************ 00:23:52.063 START TEST nvmf_shutdown_tc3 00:23:52.063 ************************************ 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:52.063 09:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:52.063 09:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:52.063 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:52.063 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
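The xtrace above is gather_supported_nvmf_pci_devs in nvmf/common.sh selecting the NICs the test may use: it seeds pci_devs with the known Intel E810 (0x159b) and X722/Mellanox device IDs, then resolves each matching PCI function to its kernel interface through sysfs; the first match, cvl_0_0 under 0000:4b:00.0, is echoed immediately below, and cvl_0_1 under 0000:4b:00.1 is found the same way. A condensed, self-contained sketch of that sysfs lookup (the X722/Mellanox matches and the link-state check in the real script are elided here):

# Condensed sketch of the discovery loop traced above; the real
# gather_supported_nvmf_pci_devs also matches X722/Mellanox IDs and
# checks link state before accepting a port.
net_devs=()
for pci in /sys/bus/pci/devices/*; do
    # Keep Intel (0x8086) E810 functions (device 0x159b), as reported in this log.
    [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue
        net_devs+=("${net##*/}")           # strip the sysfs path, keep the ifname
        echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done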
00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:52.063 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:52.063 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:52.063 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:52.064 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:52.064 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:52.064 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.064 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.064 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.064 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:52.064 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.064 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.064 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:52.064 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:52.064 09:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.064 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.064 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:52.064 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:52.064 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.064 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.064 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.064 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.064 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:52.064 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.325 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.325 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.325 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:52.325 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:52.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:23:52.325 00:23:52.325 --- 10.0.0.2 ping statistics --- 00:23:52.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.325 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:23:52.325 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:52.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:23:52.325 00:23:52.325 --- 10.0.0.1 ping statistics --- 00:23:52.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.325 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:23:52.325 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.325 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:52.325 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:52.325 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.325 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:52.325 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:52.325 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.326 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:52.326 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:52.326 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:52.326 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.326 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.326 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:52.326 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1442060 00:23:52.326 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1442060 00:23:52.326 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:52.326 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1442060 ']' 00:23:52.326 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.326 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.326 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
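Both pings succeed, confirming the two-port TCP topology that nvmf_tcp_init just built: the target port cvl_0_0 lives in the private namespace cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, and an iptables rule opens TCP port 4420 toward the target. Condensed from the trace above (run as root; interface names as in this log):

# Condensed from the nvmf_tcp_init trace above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

With the fabric reachable in both directions, nvmf_tgt is started inside the namespace below and waitforlisten polls /var/tmp/spdk.sock until its RPC server answers.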
00:23:52.326 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.326 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:52.326 [2024-11-20 09:57:23.192776] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:23:52.326 [2024-11-20 09:57:23.192843] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.586 [2024-11-20 09:57:23.288342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:52.587 [2024-11-20 09:57:23.322464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.587 [2024-11-20 09:57:23.322494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.587 [2024-11-20 09:57:23.322500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.587 [2024-11-20 09:57:23.322505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.587 [2024-11-20 09:57:23.322509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.587 [2024-11-20 09:57:23.324054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.587 [2024-11-20 09:57:23.324208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:52.587 [2024-11-20 09:57:23.324527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.587 [2024-11-20 09:57:23.324528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:53.158 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.158 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:53.158 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.158 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.158 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:53.158 [2024-11-20 09:57:24.027171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:53.158 09:57:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.158 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:53.419 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.419 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:53.419 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.419 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:53.419 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.419 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:53.419 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:53.419 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:53.419 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:53.419 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.419 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:53.419 Malloc1 
00:23:53.419 [2024-11-20 09:57:24.140303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.419 Malloc2 00:23:53.419 Malloc3 00:23:53.419 Malloc4 00:23:53.419 Malloc5 00:23:53.419 Malloc6 00:23:53.682 Malloc7 00:23:53.682 Malloc8 00:23:53.682 Malloc9 00:23:53.682 Malloc10 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1442441 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1442441 /var/tmp/bdevperf.sock 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1442441 ']' 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:53.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
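bdevperf is now pointed at the ten subsystems that the `for i in "${num_subsystems[@]}" ... cat` loop above registered through rpcs.txt, backed by Malloc1 through Malloc10 and all listening on 10.0.0.2:4420. The trace shows the blocks being emitted but not their bodies; for each subsystem the conventional SPDK RPC sequence looks roughly like the sketch below (a sketch only: the bdev sizes and serial numbers are illustrative, not taken from this log):

# Hypothetical expansion of the rpcs.txt blocks; rpc.py talks to the target's
# default /var/tmp/spdk.sock. Sizes and serials are illustrative.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in {1..10}; do
    $rpc_py bdev_malloc_create -b Malloc$i 64 512    # 64 MiB backing bdev, 512 B blocks
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done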
00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:53.682 { 00:23:53.682 "params": { 00:23:53.682 "name": "Nvme$subsystem", 00:23:53.682 "trtype": "$TEST_TRANSPORT", 00:23:53.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.682 "adrfam": "ipv4", 00:23:53.682 "trsvcid": "$NVMF_PORT", 00:23:53.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.682 "hdgst": ${hdgst:-false}, 00:23:53.682 "ddgst": ${ddgst:-false} 00:23:53.682 }, 00:23:53.682 "method": "bdev_nvme_attach_controller" 00:23:53.682 } 00:23:53.682 EOF 00:23:53.682 )") 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:53.682 { 00:23:53.682 "params": { 00:23:53.682 "name": "Nvme$subsystem", 00:23:53.682 "trtype": "$TEST_TRANSPORT", 00:23:53.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.682 "adrfam": "ipv4", 00:23:53.682 "trsvcid": "$NVMF_PORT", 00:23:53.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.682 "hdgst": ${hdgst:-false}, 00:23:53.682 "ddgst": ${ddgst:-false} 00:23:53.682 }, 00:23:53.682 "method": "bdev_nvme_attach_controller" 00:23:53.682 } 00:23:53.682 EOF 00:23:53.682 )") 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:53.682 { 00:23:53.682 "params": { 00:23:53.682 "name": "Nvme$subsystem", 00:23:53.682 "trtype": "$TEST_TRANSPORT", 00:23:53.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.682 "adrfam": "ipv4", 00:23:53.682 "trsvcid": "$NVMF_PORT", 00:23:53.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.682 "hdgst": ${hdgst:-false}, 00:23:53.682 "ddgst": ${ddgst:-false} 00:23:53.682 }, 00:23:53.682 "method": 
"bdev_nvme_attach_controller" 00:23:53.682 } 00:23:53.682 EOF 00:23:53.682 )") 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:53.682 { 00:23:53.682 "params": { 00:23:53.682 "name": "Nvme$subsystem", 00:23:53.682 "trtype": "$TEST_TRANSPORT", 00:23:53.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.682 "adrfam": "ipv4", 00:23:53.682 "trsvcid": "$NVMF_PORT", 00:23:53.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.682 "hdgst": ${hdgst:-false}, 00:23:53.682 "ddgst": ${ddgst:-false} 00:23:53.682 }, 00:23:53.682 "method": "bdev_nvme_attach_controller" 00:23:53.682 } 00:23:53.682 EOF 00:23:53.682 )") 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:53.682 { 00:23:53.682 "params": { 00:23:53.682 "name": "Nvme$subsystem", 00:23:53.682 "trtype": "$TEST_TRANSPORT", 00:23:53.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.682 "adrfam": "ipv4", 00:23:53.682 "trsvcid": "$NVMF_PORT", 00:23:53.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.682 "hdgst": ${hdgst:-false}, 00:23:53.682 "ddgst": ${ddgst:-false} 00:23:53.682 }, 00:23:53.682 "method": "bdev_nvme_attach_controller" 00:23:53.682 } 00:23:53.682 EOF 00:23:53.682 )") 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:53.682 { 00:23:53.682 "params": { 00:23:53.682 "name": "Nvme$subsystem", 00:23:53.682 "trtype": "$TEST_TRANSPORT", 00:23:53.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.682 "adrfam": "ipv4", 00:23:53.682 "trsvcid": "$NVMF_PORT", 00:23:53.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.682 "hdgst": ${hdgst:-false}, 00:23:53.682 "ddgst": ${ddgst:-false} 00:23:53.682 }, 00:23:53.682 "method": "bdev_nvme_attach_controller" 00:23:53.682 } 00:23:53.682 EOF 00:23:53.682 )") 00:23:53.682 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:53.682 [2024-11-20 09:57:24.579542] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:23:53.683 [2024-11-20 09:57:24.579595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442441 ] 00:23:53.683 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:53.683 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:53.683 { 00:23:53.683 "params": { 00:23:53.683 "name": "Nvme$subsystem", 00:23:53.683 "trtype": "$TEST_TRANSPORT", 00:23:53.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.683 "adrfam": "ipv4", 00:23:53.683 "trsvcid": "$NVMF_PORT", 00:23:53.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.683 "hdgst": ${hdgst:-false}, 00:23:53.683 "ddgst": ${ddgst:-false} 00:23:53.683 }, 00:23:53.683 "method": "bdev_nvme_attach_controller" 00:23:53.683 } 00:23:53.683 EOF 00:23:53.683 )") 00:23:53.683 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:53.683 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:53.683 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:53.683 { 00:23:53.683 "params": { 00:23:53.683 "name": "Nvme$subsystem", 00:23:53.683 "trtype": "$TEST_TRANSPORT", 00:23:53.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.683 "adrfam": "ipv4", 00:23:53.683 "trsvcid": "$NVMF_PORT", 00:23:53.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.683 "hdgst": ${hdgst:-false}, 00:23:53.683 "ddgst": ${ddgst:-false} 00:23:53.683 }, 00:23:53.683 "method": "bdev_nvme_attach_controller" 00:23:53.683 } 00:23:53.683 EOF 00:23:53.683 )") 00:23:53.683 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:53.945 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:53.945 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:53.945 { 00:23:53.945 "params": { 00:23:53.945 "name": "Nvme$subsystem", 00:23:53.945 "trtype": "$TEST_TRANSPORT", 00:23:53.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.945 "adrfam": "ipv4", 00:23:53.945 "trsvcid": "$NVMF_PORT", 00:23:53.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.945 "hdgst": ${hdgst:-false}, 00:23:53.945 "ddgst": ${ddgst:-false} 00:23:53.945 }, 00:23:53.945 "method": "bdev_nvme_attach_controller" 00:23:53.945 } 00:23:53.945 EOF 00:23:53.945 )") 00:23:53.945 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:53.945 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:53.945 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:53.945 { 00:23:53.945 "params": { 00:23:53.945 "name": "Nvme$subsystem", 00:23:53.945 "trtype": "$TEST_TRANSPORT", 00:23:53.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.945 
"adrfam": "ipv4", 00:23:53.945 "trsvcid": "$NVMF_PORT", 00:23:53.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.945 "hdgst": ${hdgst:-false}, 00:23:53.945 "ddgst": ${ddgst:-false} 00:23:53.945 }, 00:23:53.945 "method": "bdev_nvme_attach_controller" 00:23:53.945 } 00:23:53.945 EOF 00:23:53.945 )") 00:23:53.945 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:53.945 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:23:53.945 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:53.945 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:53.945 "params": { 00:23:53.945 "name": "Nvme1", 00:23:53.945 "trtype": "tcp", 00:23:53.945 "traddr": "10.0.0.2", 00:23:53.945 "adrfam": "ipv4", 00:23:53.945 "trsvcid": "4420", 00:23:53.945 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.945 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:53.945 "hdgst": false, 00:23:53.945 "ddgst": false 00:23:53.945 }, 00:23:53.945 "method": "bdev_nvme_attach_controller" 00:23:53.945 },{ 00:23:53.945 "params": { 00:23:53.945 "name": "Nvme2", 00:23:53.945 "trtype": "tcp", 00:23:53.945 "traddr": "10.0.0.2", 00:23:53.945 "adrfam": "ipv4", 00:23:53.945 "trsvcid": "4420", 00:23:53.945 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:53.945 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:53.945 "hdgst": false, 00:23:53.945 "ddgst": false 00:23:53.945 }, 00:23:53.945 "method": "bdev_nvme_attach_controller" 00:23:53.945 },{ 00:23:53.945 "params": { 00:23:53.945 "name": "Nvme3", 00:23:53.945 "trtype": "tcp", 00:23:53.945 "traddr": "10.0.0.2", 00:23:53.945 "adrfam": "ipv4", 00:23:53.945 "trsvcid": "4420", 00:23:53.945 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:53.945 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:53.945 "hdgst": false, 00:23:53.945 "ddgst": false 00:23:53.945 }, 00:23:53.945 "method": "bdev_nvme_attach_controller" 00:23:53.945 },{ 00:23:53.945 "params": { 00:23:53.945 "name": "Nvme4", 00:23:53.945 "trtype": "tcp", 00:23:53.945 "traddr": "10.0.0.2", 00:23:53.945 "adrfam": "ipv4", 00:23:53.945 "trsvcid": "4420", 00:23:53.945 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:53.945 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:53.945 "hdgst": false, 00:23:53.945 "ddgst": false 00:23:53.945 }, 00:23:53.945 "method": "bdev_nvme_attach_controller" 00:23:53.945 },{ 00:23:53.945 "params": { 00:23:53.945 "name": "Nvme5", 00:23:53.945 "trtype": "tcp", 00:23:53.945 "traddr": "10.0.0.2", 00:23:53.945 "adrfam": "ipv4", 00:23:53.945 "trsvcid": "4420", 00:23:53.945 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:53.946 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:53.946 "hdgst": false, 00:23:53.946 "ddgst": false 00:23:53.946 }, 00:23:53.946 "method": "bdev_nvme_attach_controller" 00:23:53.946 },{ 00:23:53.946 "params": { 00:23:53.946 "name": "Nvme6", 00:23:53.946 "trtype": "tcp", 00:23:53.946 "traddr": "10.0.0.2", 00:23:53.946 "adrfam": "ipv4", 00:23:53.946 "trsvcid": "4420", 00:23:53.946 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:53.946 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:53.946 "hdgst": false, 00:23:53.946 "ddgst": false 00:23:53.946 }, 00:23:53.946 "method": "bdev_nvme_attach_controller" 00:23:53.946 },{ 00:23:53.946 "params": { 00:23:53.946 "name": "Nvme7", 00:23:53.946 "trtype": "tcp", 00:23:53.946 "traddr": "10.0.0.2", 
00:23:53.946 "adrfam": "ipv4", 00:23:53.946 "trsvcid": "4420", 00:23:53.946 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:53.946 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:53.946 "hdgst": false, 00:23:53.946 "ddgst": false 00:23:53.946 }, 00:23:53.946 "method": "bdev_nvme_attach_controller" 00:23:53.946 },{ 00:23:53.946 "params": { 00:23:53.946 "name": "Nvme8", 00:23:53.946 "trtype": "tcp", 00:23:53.946 "traddr": "10.0.0.2", 00:23:53.946 "adrfam": "ipv4", 00:23:53.946 "trsvcid": "4420", 00:23:53.946 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:53.946 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:53.946 "hdgst": false, 00:23:53.946 "ddgst": false 00:23:53.946 }, 00:23:53.946 "method": "bdev_nvme_attach_controller" 00:23:53.946 },{ 00:23:53.946 "params": { 00:23:53.946 "name": "Nvme9", 00:23:53.946 "trtype": "tcp", 00:23:53.946 "traddr": "10.0.0.2", 00:23:53.946 "adrfam": "ipv4", 00:23:53.946 "trsvcid": "4420", 00:23:53.946 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:53.946 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:53.946 "hdgst": false, 00:23:53.946 "ddgst": false 00:23:53.946 }, 00:23:53.946 "method": "bdev_nvme_attach_controller" 00:23:53.946 },{ 00:23:53.946 "params": { 00:23:53.946 "name": "Nvme10", 00:23:53.946 "trtype": "tcp", 00:23:53.946 "traddr": "10.0.0.2", 00:23:53.946 "adrfam": "ipv4", 00:23:53.946 "trsvcid": "4420", 00:23:53.946 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:53.946 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:53.946 "hdgst": false, 00:23:53.946 "ddgst": false 00:23:53.946 }, 00:23:53.946 "method": "bdev_nvme_attach_controller" 00:23:53.946 }' 00:23:53.946 [2024-11-20 09:57:24.668042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.946 [2024-11-20 09:57:24.704788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.329 Running I/O for 10 seconds... 
00:23:55.329 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.329 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:55.329 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:55.329 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.329 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:55.590 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.590 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:55.590 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:55.590 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:55.590 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:55.590 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:55.590 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:55.590 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:55.590 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:55.590 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:55.590 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:55.590 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.590 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:55.590 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.590 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:55.590 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:55.590 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:55.851 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:55.851 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:55.851 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:55.851 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:55.851 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.851 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:55.851 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.851 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:55.851 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:55.851 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:56.111 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:56.111 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:56.111 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:56.111 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:56.111 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.111 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.111 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.392 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=136 00:23:56.392 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:23:56.392 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:56.392 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:56.392 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:56.392 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1442060 00:23:56.392 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1442060 ']' 00:23:56.392 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1442060 00:23:56.392 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:56.392 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.392 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1442060 00:23:56.392 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:56.392 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:56.392 09:57:27 
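waitforio, traced above with read_io_count climbing 3 -> 67 -> 136 before clearing the threshold, polls bdevperf's RPC socket until the first bdev shows real read progress. A minimal sketch under the names visible in the trace (rpc_cmd is the suite's RPC wrapper; everything else mirrors target/shutdown.sh@51-70):

# Hedged reconstruction of waitforio (target/shutdown.sh@51-70): poll
# bdev_get_iostat over the bdevperf RPC socket until the bdev has completed
# at least 100 reads, retrying up to 10 times with a 0.25s pause.
waitforio() {
    # '[' -z /var/tmp/bdevperf.sock ']' and '[' -z Nvme1n1 ']' in the trace
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    [[ -n $rpc_sock && -n $bdev ]] || return 1
    for ((i = 10; i != 0; i--)); do
        # bdev_get_iostat piped into jq, exactly as traced at @61
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [[ $read_io_count -ge 100 ]]; then
            ret=0 # enough reads completed; I/O is really flowing
            break
        fi
        sleep 0.25 # @68: brief pause before the next poll
    done
    return $ret
}

The 100-read floor is what lets this shutdown test distinguish a bdevperf instance that is actually driving I/O from one that has merely attached, before the target is torn down underneath it.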
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1442060'
killing process with pid 1442060
09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1442060
09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1442060
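The kill/wait pair above is autotest_common.sh's killprocess helper; the uname/ps checks at @959-@964 exist to special-case a sudo wrapper. A sketch reconstructed from the traced checks (the non-Linux branch and the sudo child-resolution are assumptions):

# Hedged reconstruction of killprocess (autotest_common.sh@954-978).
killprocess() {
    local pid=$1 child process_name
    [[ -n $pid ]] || return 1              # '[' -z 1442060 ']'
    kill -0 "$pid" 2> /dev/null || return 0 # already gone, nothing to do
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    else
        process_name=$(ps -c -o command= "$pid" | tail -1) # assumption
    fi
    if [[ $process_name == sudo ]]; then
        # assumption: the real app is the wrapper's single child
        child=$(pgrep -P "$pid")
        echo "killing process with pid $child"
        kill "$child"
    else
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" # reap the traced pid either way ('wait 1442060')
}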
00:23:56.392 [2024-11-20 09:57:27.084463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753110 is same with the state(6) to be set
00:23:56.393 [2024-11-20 09:57:27.085898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755ce0 is same with the state(6) to be set
00:23:56.394 [2024-11-20 09:57:27.087318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:56.394 [2024-11-20 09:57:27.087353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.394 [2024-11-20 09:57:27.087363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:56.394 [2024-11-20 09:57:27.087371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.394 [2024-11-20 09:57:27.087379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:56.394 [2024-11-20 09:57:27.087391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.394 [2024-11-20 09:57:27.087400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:56.394 [2024-11-20 09:57:27.087407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.394 [2024-11-20 09:57:27.087415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3bcb0 is same with the state(6) to be set
00:23:56.394 [2024-11-20 09:57:27.087450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.394 [2024-11-20 09:57:27.087460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.394 [2024-11-20 09:57:27.087468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.394 [2024-11-20 09:57:27.087476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.394 [2024-11-20 09:57:27.087484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.394 [2024-11-20 09:57:27.087491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.394 [2024-11-20 09:57:27.087500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.394 [2024-11-20 09:57:27.087507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.394 [2024-11-20 09:57:27.087514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10afd00 is same with the state(6) to be set 00:23:56.394 [2024-11-20 09:57:27.088083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.394 [2024-11-20 09:57:27.088103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.394 [2024-11-20 09:57:27.088119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.394 [2024-11-20 09:57:27.088127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.394 [2024-11-20 09:57:27.088137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.394 [2024-11-20 09:57:27.088144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.394 [2024-11-20 09:57:27.088154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.394 [2024-11-20 09:57:27.088169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.394 [2024-11-20 09:57:27.088179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.394 [2024-11-20 09:57:27.088186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.394 [2024-11-20 09:57:27.088196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.394 [2024-11-20 09:57:27.088203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:56.394 [2024-11-20 09:57:27.088213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.394 [2024-11-20 09:57:27.088225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.394 [2024-11-20 09:57:27.088234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.394 [2024-11-20 09:57:27.088242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.394 [2024-11-20 09:57:27.088251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.394 [2024-11-20 09:57:27.088259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.394 [2024-11-20 09:57:27.088268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.394 [2024-11-20 09:57:27.088275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.395 [2024-11-20 09:57:27.088285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.395 [2024-11-20 09:57:27.088292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.395 [2024-11-20 09:57:27.088301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.395 [2024-11-20 09:57:27.088308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.395 [2024-11-20 09:57:27.088318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.395 [2024-11-20 09:57:27.088325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.395 [2024-11-20 09:57:27.088334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.395 [2024-11-20 09:57:27.088342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.395 [2024-11-20 09:57:27.088351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.395 [2024-11-20 09:57:27.088359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.395 [2024-11-20 09:57:27.088368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.395 [2024-11-20 09:57:27.088375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:56.395 [2024-11-20 09:57:27.088385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.395 [2024-11-20 09:57:27.088380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753ad0 is same with the state(6) to be set
00:23:56.395 [2024-11-20 09:57:27.088393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.395 [2024-11-20 09:57:27.088403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.395 [2024-11-20 09:57:27.088413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.395 [2024-11-20 09:57:27.088434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.395 [2024-11-20 09:57:27.088442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.395 [2024-11-20 09:57:27.088453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.395 [2024-11-20 09:57:27.088463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.395 [2024-11-20 09:57:27.088473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.395 [2024-11-20 09:57:27.088481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.395 [2024-11-20 09:57:27.088493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.395 [2024-11-20 09:57:27.088502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.395 [2024-11-20 09:57:27.088512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.395 [2024-11-20 09:57:27.088520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.395 [2024-11-20 09:57:27.088532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.395 [2024-11-20 09:57:27.088540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.395 [2024-11-20 09:57:27.088551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.395 [2024-11-20 09:57:27.088559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.395 [2024-11-20 09:57:27.088569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.395 [2024-11-20 09:57:27.088577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.395 [2024-11-20 09:57:27.088589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.395 [2024-11-20 09:57:27.088599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.395 [2024-11-20 09:57:27.088609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.395 [2024-11-20 09:57:27.088617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.395 [2024-11-20 09:57:27.088627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.395 [2024-11-20 09:57:27.088635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.396 [2024-11-20 09:57:27.088646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.396 [2024-11-20 09:57:27.088656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.396 [2024-11-20 09:57:27.088666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.396 [2024-11-20 09:57:27.088674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.396 [2024-11-20 09:57:27.088684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.396 [2024-11-20 09:57:27.088692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.396 [2024-11-20 09:57:27.088704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.396 [2024-11-20 09:57:27.088713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.396 [2024-11-20 09:57:27.088725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.396 [2024-11-20 09:57:27.088734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.396 [2024-11-20 09:57:27.088744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.396 [2024-11-20 09:57:27.088753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.396 [2024-11-20 09:57:27.088764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.396 [2024-11-20 09:57:27.088773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.396 [2024-11-20 09:57:27.088783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.396 [2024-11-20 09:57:27.088791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:23:56.396 [2024-11-20 09:57:27.088800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.396 [2024-11-20 09:57:27.088807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.396 [2024-11-20 09:57:27.088817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.396 [2024-11-20 09:57:27.088824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.396 [2024-11-20 09:57:27.088833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.396 [2024-11-20 09:57:27.088842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.396 [2024-11-20 09:57:27.088852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.396 [2024-11-20 09:57:27.088859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.396 [2024-11-20 09:57:27.088868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.396 [2024-11-20 09:57:27.088876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.396 [2024-11-20 09:57:27.088885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.396 [2024-11-20 09:57:27.088892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.396 [2024-11-20 09:57:27.088901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.396 [2024-11-20 09:57:27.088909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.396 [2024-11-20 09:57:27.088918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.396 [2024-11-20 09:57:27.088926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.396 [2024-11-20 09:57:27.088936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.396 [2024-11-20 09:57:27.088944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.396 [2024-11-20 09:57:27.088953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.396 [2024-11-20 09:57:27.088960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:56.396 [2024-11-20 09:57:27.088970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.396 [2024-11-20 09:57:27.088977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.396 [2024-11-20 09:57:27.088986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.396 [2024-11-20 09:57:27.088994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.396 [2024-11-20 09:57:27.089003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.396 [2024-11-20 09:57:27.089010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.396 [2024-11-20 09:57:27.089019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.396 [2024-11-20 09:57:27.089026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.396 [2024-11-20 09:57:27.089036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.396 [2024-11-20 09:57:27.089043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.396 [2024-11-20 09:57:27.089057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.396 [2024-11-20 09:57:27.089065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.397 [2024-11-20 09:57:27.089074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.397 [2024-11-20 09:57:27.089081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.397 [2024-11-20 09:57:27.089091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.397 [2024-11-20 09:57:27.089098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.397 [2024-11-20 09:57:27.089107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.397 [2024-11-20 09:57:27.089115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.397 [2024-11-20 09:57:27.089125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.397 [2024-11-20 09:57:27.089132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.397 
[2024-11-20 09:57:27.089142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.397 [2024-11-20 09:57:27.089149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.397 [2024-11-20 09:57:27.089163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.397 [2024-11-20 09:57:27.089171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.397 [2024-11-20 09:57:27.089181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.397 [2024-11-20 09:57:27.089188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.397 [2024-11-20 09:57:27.089198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.397 [2024-11-20 09:57:27.089205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.397 [2024-11-20 09:57:27.089215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.397 [2024-11-20 09:57:27.089222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.397 [2024-11-20 09:57:27.089231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.397 [2024-11-20 09:57:27.089239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.397 [2024-11-20 09:57:27.089249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.397 [2024-11-20 09:57:27.089256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.397 [2024-11-20 09:57:27.089283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:56.397 [2024-11-20 09:57:27.089823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089858] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the 
state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.089997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.090002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.090007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.090011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.090016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.090021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.090026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.090030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.090035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.090039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.397 [2024-11-20 09:57:27.090044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754490 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 
09:57:27.090912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.090999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same 
with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091123] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754960 is same with the state(6) to be set 00:23:56.398 [2024-11-20 09:57:27.091697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.398 [2024-11-20 09:57:27.091719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.091734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.091743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.091754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.091763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.091774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.091783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.091794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.091803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.091815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.091823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.091835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.091844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.091859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.091869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.091880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.091888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.091900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.091909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.091907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754e30 is same with the state(6) to be set 00:23:56.399 [2024-11-20 09:57:27.091921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.091930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.091941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.091951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.091962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.091970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.091981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.091990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 
[2024-11-20 09:57:27.092035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 
09:57:27.092211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 09:57:27.092370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399 [2024-11-20 09:57:27.092377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.399 [2024-11-20 
09:57:27.092377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.399
[2024-11-20 09:57:27.092387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.399
[2024-11-20 09:57:27.092398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.400
[2024-11-20 09:57:27.092400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.400
[2024-11-20 09:57:27.092411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.400
[2024-11-20 09:57:27.092417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.400
[2024-11-20 09:57:27.092429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.400
[2024-11-20 09:57:27.092435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.400
[2024-11-20 09:57:27.092447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.400
[2024-11-20 09:57:27.092465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.400
[2024-11-20 09:57:27.092470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.400
[2024-11-20 09:57:27.092483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.400
[2024-11-20 09:57:27.092494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.400
[2024-11-20 09:57:27.092499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.400
[2024-11-20 09:57:27.092510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.400
[2024-11-20 09:57:27.092521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.400
[2024-11-20 09:57:27.092531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.400
[2024-11-20 09:57:27.092537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.400
[2024-11-20 09:57:27.092548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.400
[2024-11-20 09:57:27.092555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.400
[2024-11-20 09:57:27.092566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.400
[2024-11-20 09:57:27.092577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.400
[2024-11-20 09:57:27.092590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.400
[2024-11-20 09:57:27.092596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.400
[2024-11-20 09:57:27.092606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.400
[2024-11-20 09:57:27.092611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.400
[2024-11-20 09:57:27.092623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.400
[2024-11-20 09:57:27.092635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.400
[2024-11-20 09:57:27.092645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.400
[2024-11-20 09:57:27.092658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.400
[2024-11-20 09:57:27.092663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.400
[2024-11-20 09:57:27.092675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.400
[2024-11-20 09:57:27.092686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.400
[2024-11-20 09:57:27.092690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.400
[2024-11-20 09:57:27.092692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.401
[2024-11-20 09:57:27.092697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.401
[2024-11-20 09:57:27.092699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.401
[2024-11-20 09:57:27.092702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.401
[2024-11-20 09:57:27.092707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.401
[2024-11-20 09:57:27.092707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.401
[2024-11-20 09:57:27.092713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.401
[2024-11-20 09:57:27.092718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.401
[2024-11-20 09:57:27.092719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.401
[2024-11-20 09:57:27.092724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.401
[2024-11-20 09:57:27.092726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.401
[2024-11-20 09:57:27.092729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.401
[2024-11-20 09:57:27.092734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.401
[2024-11-20 09:57:27.092736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.401
[2024-11-20 09:57:27.092739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.401
[2024-11-20 09:57:27.092746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.401
[2024-11-20 09:57:27.092747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.401
[2024-11-20 09:57:27.092751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.401
[2024-11-20 09:57:27.092756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755320 is same with the state(6) to be set 00:23:56.401
[2024-11-20 09:57:27.092757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.401
[2024-11-20 09:57:27.092765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.401
[2024-11-20 09:57:27.092774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.401
[2024-11-20 09:57:27.092782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.401 [2024-11-20 09:57:27.092791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.401 [2024-11-20 09:57:27.092799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.401 [2024-11-20 09:57:27.092808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.401 [2024-11-20 09:57:27.092815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.401 [2024-11-20 09:57:27.092824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.401 [2024-11-20 09:57:27.092832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.401 [2024-11-20 09:57:27.092841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.401 [2024-11-20 09:57:27.092849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.401 [2024-11-20 09:57:27.092859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.401 [2024-11-20 09:57:27.092866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.401 [2024-11-20 09:57:27.092875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.401 [2024-11-20 09:57:27.092884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.401 [2024-11-20 09:57:27.092894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.401 [2024-11-20 09:57:27.092901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.401 [2024-11-20 09:57:27.093215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 
09:57:27.093364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.401 [2024-11-20 09:57:27.093455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.401 [2024-11-20 09:57:27.093475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.401 [2024-11-20 09:57:27.093486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.401 [2024-11-20 09:57:27.093521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.401 [2024-11-20 09:57:27.093572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.401 [2024-11-20 09:57:27.093618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.401 [2024-11-20 09:57:27.093681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.401 [2024-11-20 09:57:27.093730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.401 [2024-11-20 09:57:27.093784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.401 [2024-11-20 09:57:27.093831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.093885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.093932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.093987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.094035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.094095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.094143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.094204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.094251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.094305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.094346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.094395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.094436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.094489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.094532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.094577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.094620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.094672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.094714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.094760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.094807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.094853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.094897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.094949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.094991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.095038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.095080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.095128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.095175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.095225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.095268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.095314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.095355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.095409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.095451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.095497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.095539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.095587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.095628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.095675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.095718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.095765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.095806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.095858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.095899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.095945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.095987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.096032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.096075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.096119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.096164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.096214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.096256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.096301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.096350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.096396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.096439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.096485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.096526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.096574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.096617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.096664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.096704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.096750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.402 [2024-11-20 09:57:27.096798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.402 [2024-11-20 09:57:27.096844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.096886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.096932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.096975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.097022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.097063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.109313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is 
same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.109539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17557f0 is same with the state(6) to be set 00:23:56.403 [2024-11-20 09:57:27.112662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.112691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.112703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.112711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.112721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.112729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.112739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.112746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.112756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.112763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.112773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.112780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.112789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.112797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.112811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.112818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.112828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.112835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.112845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.112852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.112862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.112869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.112878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.112886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.112895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.112902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.112911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.112919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:56.403 [2024-11-20 09:57:27.112928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.112935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.112945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.112953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.112963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.112970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.112980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.112987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.112996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.113003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.403 [2024-11-20 09:57:27.113013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.403 [2024-11-20 09:57:27.113022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.404 [2024-11-20 09:57:27.113039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.404 [2024-11-20 09:57:27.113056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.404 [2024-11-20 09:57:27.113073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.404 [2024-11-20 09:57:27.113090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 
09:57:27.113127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:56.404 [2024-11-20 09:57:27.113325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:56.404 [2024-11-20 09:57:27.113385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc329f0 (9): Bad file descriptor 00:23:56.404 [2024-11-20 09:57:27.113426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53610 is same with the state(6) to be set 00:23:56.404 [2024-11-20 09:57:27.113509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3bcb0 (9): Bad file descriptor 00:23:56.404 [2024-11-20 09:57:27.113534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc38420 is same with the state(6) to be set 00:23:56.404 [2024-11-20 09:57:27.113611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10afd00 (9): Bad file descriptor 00:23:56.404 [2024-11-20 09:57:27.113639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1067180 is same with the state(6) to be set 00:23:56.404 [2024-11-20 09:57:27.113746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082310 is same with the state(6) to be set 00:23:56.404 [2024-11-20 09:57:27.113833] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30fa0 is same with the state(6) to be set 00:23:56.404 [2024-11-20 09:57:27.113923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.113978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.113984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc39810 is same with the state(6) to be set 00:23:56.404 [2024-11-20 09:57:27.114009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.114017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.114025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.114032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.114040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.114047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.404 [2024-11-20 09:57:27.114056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.404 [2024-11-20 09:57:27.114063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.405 [2024-11-20 09:57:27.114070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108df20 is same with the state(6) to be set 00:23:56.405 [2024-11-20 09:57:27.117098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:56.405 [2024-11-20 09:57:27.117128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:56.405 [2024-11-20 09:57:27.117144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108df20 (9): Bad file descriptor 00:23:56.405 [2024-11-20 09:57:27.117163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc38420 (9): Bad file descriptor 00:23:56.405 [2024-11-20 09:57:27.118009] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:56.405 [2024-11-20 09:57:27.118402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.405 [2024-11-20 09:57:27.118447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc329f0 with addr=10.0.0.2, port=4420 00:23:56.405 [2024-11-20 09:57:27.118462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc329f0 is same with the state(6) to be set 00:23:56.405 [2024-11-20 09:57:27.118563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.405 [2024-11-20 09:57:27.118580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.405 [2024-11-20 09:57:27.118598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.405 [2024-11-20 09:57:27.118608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.405 [2024-11-20 09:57:27.118621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.405 [2024-11-20 09:57:27.118630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.405 [2024-11-20 09:57:27.118642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.405 [2024-11-20 09:57:27.118651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.405 [2024-11-20 09:57:27.118664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.405 [2024-11-20 09:57:27.118674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.405 [2024-11-20 09:57:27.118685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.405 [2024-11-20 09:57:27.118695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.405 [2024-11-20 09:57:27.118707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.405 [2024-11-20 09:57:27.118716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.405 [2024-11-20 09:57:27.118729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.405 [2024-11-20 09:57:27.118738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.405 [2024-11-20 09:57:27.118750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.405 [2024-11-20 09:57:27.118759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.405 [2024-11-20 09:57:27.118777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.405 [2024-11-20 09:57:27.118787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.405 [2024-11-20 09:57:27.118799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.405 [2024-11-20 09:57:27.118808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.405 [2024-11-20 09:57:27.118820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.405 [2024-11-20 09:57:27.118829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.405 [2024-11-20 09:57:27.118841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.405 [2024-11-20 09:57:27.118851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.405 [2024-11-20 09:57:27.118862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.405 [2024-11-20 09:57:27.118872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated NOTICE pairs trimmed (09:57:27.118883-09:57:27.119860): READ sqid:1 cid:18-63 nsid:1 lba:26880-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...]
00:23:56.406 [2024-11-20 09:57:27.119991] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:56.406 [2024-11-20 09:57:27.121377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.406 [2024-11-20 09:57:27.121422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc38420 with addr=10.0.0.2, port=4420
00:23:56.406 [2024-11-20 09:57:27.121437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc38420 is same with the state(6) to be set
00:23:56.406 [2024-11-20 09:57:27.121618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.406 [2024-11-20 09:57:27.121632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108df20 with addr=10.0.0.2, port=4420
00:23:56.406 [2024-11-20 09:57:27.121642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108df20 is same with the state(6) to be set
00:23:56.406 [2024-11-20 09:57:27.121657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc329f0 (9): Bad file descriptor
00:23:56.406 [2024-11-20 09:57:27.123281] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:56.406 [2024-11-20 09:57:27.123339] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:56.406 [2024-11-20 09:57:27.123396] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:56.406 [2024-11-20 09:57:27.123444] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:56.406 [2024-11-20 09:57:27.123468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:56.406 [2024-11-20 09:57:27.123499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc38420 (9): Bad file descriptor
00:23:56.406 [2024-11-20 09:57:27.123513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108df20 (9): Bad file descriptor
00:23:56.406 [2024-11-20 09:57:27.123524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:23:56.406 [2024-11-20 09:57:27.123533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:23:56.406 [2024-11-20 09:57:27.123543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:23:56.406 [2024-11-20 09:57:27.123555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:23:56.406 [2024-11-20 09:57:27.123589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb53610 (9): Bad file descriptor
00:23:56.406 [2024-11-20 09:57:27.123629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1067180 (9): Bad file descriptor
00:23:56.406 [2024-11-20 09:57:27.123653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082310 (9): Bad file descriptor
00:23:56.406 [2024-11-20 09:57:27.123677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30fa0 (9): Bad file descriptor
00:23:56.406 [2024-11-20 09:57:27.123700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc39810 (9): Bad file descriptor
00:23:56.406 [2024-11-20 09:57:27.124174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.406 [2024-11-20 09:57:27.124195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc3bcb0 with addr=10.0.0.2, port=4420
00:23:56.407 [2024-11-20 09:57:27.124205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3bcb0 is same with the state(6) to be set
00:23:56.407 [2024-11-20 09:57:27.124215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:23:56.407 [2024-11-20 09:57:27.124229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:23:56.407 [2024-11-20 09:57:27.124238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:23:56.407 [2024-11-20 09:57:27.124248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:23:56.407 [2024-11-20 09:57:27.124257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:23:56.407 [2024-11-20 09:57:27.124265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:23:56.407 [2024-11-20 09:57:27.124273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:23:56.407 [2024-11-20 09:57:27.124281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
[... repeated NOTICE pairs trimmed (09:57:27.124685-09:57:27.126063): READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...]
00:23:56.408 [2024-11-20 09:57:27.126074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7c320 is same with the state(6) to be set
00:23:56.408 [2024-11-20 09:57:27.127413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:56.408 [2024-11-20 09:57:27.127437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3bcb0 (9): Bad file descriptor
00:23:56.408 [2024-11-20 09:57:27.127818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.408 [2024-11-20 09:57:27.127835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10afd00 with addr=10.0.0.2, port=4420
00:23:56.408 [2024-11-20 09:57:27.127844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10afd00 is same with the state(6) to be set
00:23:56.408 [2024-11-20 09:57:27.127855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:56.408 [2024-11-20 09:57:27.127862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:56.409 [2024-11-20 09:57:27.127872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:56.409 [2024-11-20 09:57:27.127882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:23:56.409 [2024-11-20 09:57:27.128199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:56.409 [2024-11-20 09:57:27.128220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10afd00 (9): Bad file descriptor
00:23:56.409 [2024-11-20 09:57:27.128573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.409 [2024-11-20 09:57:27.128585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc329f0 with addr=10.0.0.2, port=4420
00:23:56.409 [2024-11-20 09:57:27.128593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc329f0 is same with the state(6) to be set
00:23:56.409 [2024-11-20 09:57:27.128601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:23:56.409 [2024-11-20 09:57:27.128607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:23:56.409 [2024-11-20 09:57:27.128615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:23:56.409 [2024-11-20 09:57:27.128623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:23:56.409 [2024-11-20 09:57:27.128666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc329f0 (9): Bad file descriptor
00:23:56.409 [2024-11-20 09:57:27.128722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:23:56.409 [2024-11-20 09:57:27.128730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:23:56.409 [2024-11-20 09:57:27.128738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:23:56.409 [2024-11-20 09:57:27.128744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:23:56.409 [2024-11-20 09:57:27.128776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:56.409 [2024-11-20 09:57:27.128786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:56.409 [2024-11-20 09:57:27.129138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.409 [2024-11-20 09:57:27.129151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108df20 with addr=10.0.0.2, port=4420
00:23:56.409 [2024-11-20 09:57:27.129163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108df20 is same with the state(6) to be set
00:23:56.409 [2024-11-20 09:57:27.129409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.409 [2024-11-20 09:57:27.129419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc38420 with addr=10.0.0.2, port=4420
00:23:56.409 [2024-11-20 09:57:27.129426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc38420 is same with the state(6) to be set
00:23:56.409 [2024-11-20 09:57:27.129460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108df20 (9): Bad file descriptor
00:23:56.409 [2024-11-20 09:57:27.129470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc38420 (9): Bad file descriptor
00:23:56.409 [2024-11-20 09:57:27.129502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:23:56.409 [2024-11-20 09:57:27.129509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:23:56.409 [2024-11-20 09:57:27.129516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:23:56.409 [2024-11-20 09:57:27.129522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:23:56.409 [2024-11-20 09:57:27.129530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:23:56.409 [2024-11-20 09:57:27.129536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:23:56.409 [2024-11-20 09:57:27.129543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:23:56.409 [2024-11-20 09:57:27.129550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
[... repeated NOTICE pairs trimmed (09:57:27.133601-09:57:27.134522): READ sqid:1 cid:5-57 nsid:1 lba:25216-31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...]
00:23:56.410 [2024-11-20 09:57:27.134533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.410 [2024-11-20 09:57:27.134541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.410 [2024-11-20 09:57:27.134550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.410 [2024-11-20 09:57:27.134558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.410 [2024-11-20 09:57:27.134568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.410 [2024-11-20 09:57:27.134576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.410 [2024-11-20 09:57:27.134587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.410 [2024-11-20 09:57:27.134595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.410 [2024-11-20 09:57:27.134604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.410 [2024-11-20 09:57:27.134612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.410 [2024-11-20 09:57:27.134621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.410 [2024-11-20 09:57:27.134629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.410 [2024-11-20 09:57:27.134638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.410 [2024-11-20 09:57:27.134646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.410 [2024-11-20 09:57:27.134655] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.410 [2024-11-20 09:57:27.134663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.134672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.134680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.134689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.134697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.134706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.134714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.134723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe40d60 is same with the state(6) to be set 00:23:56.411 [2024-11-20 09:57:27.136010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136125] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.411 [2024-11-20 09:57:27.136512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.411 [2024-11-20 09:57:27.136520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:56.412 [2024-11-20 09:57:27.136817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.136972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 
09:57:27.136989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.136996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.137005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.137013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.137022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.137029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.137038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103d4d0 is same with the state(6) to be set 00:23:56.412 [2024-11-20 09:57:27.138300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.138314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.138325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.138333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.138343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.138350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.138360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.138367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.138379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.138387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.138396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.138404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.138413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.138420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.138430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.138437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.412 [2024-11-20 09:57:27.138447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.412 [2024-11-20 09:57:27.138455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138935] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.138986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.138995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.139003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.139012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.139021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.139031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.139038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.139047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.139054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.139064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.139071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.139081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.139088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.139097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.139105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.139114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.413 [2024-11-20 09:57:27.139121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.413 [2024-11-20 09:57:27.139130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.139138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.139147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.139155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.139168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.139175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.139185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.139192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.139201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.139209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.139219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.139227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.139238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.139246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.139255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.139263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.139273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.139280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.139290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.139297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.139307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.139314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.139324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.139332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.139341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.139348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.139358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.139365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.139375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.139382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.139392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.139399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.139408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103ea00 is same with the state(6) to be set 00:23:56.414 [2024-11-20 09:57:27.140692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.140707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.140720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.140729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.140741] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.140753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.140765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.140774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.140783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.140791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.140800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.140807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.140817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.140825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.140834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.140841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.140851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.140858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.140868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.140875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.140885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.140892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.140902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.140910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.140919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.140927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.140936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.140944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.140954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.140963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.140974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.140981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.140991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.140999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.141008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.141015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.141025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.141032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.141041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.141048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.141058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.141065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.141075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.141082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.141091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.414 [2024-11-20 09:57:27.141098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.414 [2024-11-20 09:57:27.141108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:56.415 [2024-11-20 09:57:27.141446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 
09:57:27.141617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.415 [2024-11-20 09:57:27.141760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.415 [2024-11-20 09:57:27.141767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.141777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.141784] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.141793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.141801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.141809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10414f0 is same with the state(6) to be set 00:23:56.416 [2024-11-20 09:57:27.143078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.416 [2024-11-20 09:57:27.143609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.416 [2024-11-20 09:57:27.143618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.143985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.143995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.144002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.144012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.144019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.144028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.144036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.144045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.144052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.144062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.144069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.144078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.144085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.144095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.144102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.144111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.144119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.144128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.144135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.144145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.144152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.144166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.144174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.144186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.417 [2024-11-20 09:57:27.144193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.417 [2024-11-20 09:57:27.144202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1042a20 is same with the state(6) to be set 00:23:56.417 [2024-11-20 09:57:27.145732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:56.417 [2024-11-20 09:57:27.145755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:56.417 [2024-11-20 09:57:27.145765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:56.417 [2024-11-20 09:57:27.145846] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:23:56.417 [2024-11-20 09:57:27.145859] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:23:56.417 [2024-11-20 09:57:27.145872] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
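Every completion in the dump above carries the same status pair, which SPDK prints as (00/08): read it as (status code type / status code). SCT 0x0 is the NVMe generic command status set, and generic status 0x08 is "Command Aborted due to SQ Deletion" — the queued reads die because their submission queue is torn down mid-run, which is exactly the failure this shutdown test provokes. A minimal decode of that pair (the lookup table below is a hand-written subset of the NVMe base spec, not something read out of SPDK):

    # Decode the "(sct/sc)" pair from spdk_nvme_print_completion output.
    # Only the codes that actually appear in this log are tabulated.
    declare -A SC_GENERIC=( [0x08]="COMMAND ABORTED DUE TO SQ DELETION" )
    sct=0x00 sc=0x08    # the "(00/08)" pair printed on every completion above
    printf '(%02x/%02x) -> GENERIC / %s\n' "$sct" "$sc" "${SC_GENERIC[$sc]}"
    # -> (00/08) -> GENERIC / COMMAND ABORTED DUE TO SQ DELETION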
00:23:56.417 [2024-11-20 09:57:27.145933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:56.417 task offset: 26240 on job bdev=Nvme2n1 fails
00:23:56.417
00:23:56.417                                                  Latency(us)
00:23:56.417 [2024-11-20T08:57:27.333Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:56.417 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:56.417 Job: Nvme1n1 ended in about 0.94 seconds with error
00:23:56.418 Verification LBA range: start 0x0 length 0x400
00:23:56.418 Nvme1n1    :       0.94     208.15      13.01      63.72       0.00  232524.32    4642.13  248162.99
00:23:56.418 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:56.418 Job: Nvme2n1 ended in about 0.91 seconds with error
00:23:56.418 Verification LBA range: start 0x0 length 0x400
00:23:56.418 Nvme2n1    :       0.91     210.89      13.18      70.30       0.00  220036.29    2730.67  235929.60
00:23:56.418 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:56.418 Job: Nvme3n1 ended in about 0.95 seconds with error
00:23:56.418 Verification LBA range: start 0x0 length 0x400
00:23:56.418 Nvme3n1    :       0.95     206.37      12.90      67.04       0.00  222040.03   11468.80  256901.12
00:23:56.418 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:56.418 Job: Nvme4n1 ended in about 0.93 seconds with error
00:23:56.418 Verification LBA range: start 0x0 length 0x400
00:23:56.418 Nvme4n1    :       0.93     205.55      12.85      68.52       0.00  216498.99   22391.47  253405.87
00:23:56.418 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:56.418 Job: Nvme5n1 ended in about 0.96 seconds with error
00:23:56.418 Verification LBA range: start 0x0 length 0x400
00:23:56.418 Nvme5n1    :       0.96     138.99       8.69      61.66       0.00  289536.57   21954.56  284863.15
00:23:56.418 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:56.418 Job: Nvme6n1 ended in about 0.96 seconds with error
00:23:56.418 Verification LBA range: start 0x0 length 0x400
00:23:56.418 Nvme6n1    :       0.96     133.43       8.34      66.72       0.00  284596.05   18896.21  277872.64
00:23:56.418 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:56.418 Job: Nvme7n1 ended in about 0.94 seconds with error
00:23:56.418 Verification LBA range: start 0x0 length 0x400
00:23:56.418 Nvme7n1    :       0.94     205.25      12.83      68.42       0.00  202643.20   21626.88  255153.49
00:23:56.418 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:56.418 Job: Nvme8n1 ended in about 0.96 seconds with error
00:23:56.418 Verification LBA range: start 0x0 length 0x400
00:23:56.418 Nvme8n1    :       0.96     133.10       8.32      66.55       0.00  272698.45   13707.95  265639.25
00:23:56.418 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:56.418 Job: Nvme9n1 ended in about 0.96 seconds with error
00:23:56.418 Verification LBA range: start 0x0 length 0x400
00:23:56.418 Nvme9n1    :       0.96     132.77       8.30      66.39       0.00  267327.72   18240.85  251658.24
00:23:56.418 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:56.418 Job: Nvme10n1 ended in about 0.95 seconds with error
00:23:56.418 Verification LBA range: start 0x0 length 0x400
00:23:56.418 Nvme10n1   :       0.95     135.30       8.46      67.65       0.00  255162.60   20316.16  274377.39
00:23:56.418 [2024-11-20T08:57:27.334Z] ===================================================================================================================
00:23:56.418 [2024-11-20T08:57:27.334Z] Total      :              1709.81     106.86     666.95       0.00  242324.31    2730.67  284863.15
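A quick cross-check of the table: with a fixed 65536-byte IO size, the MiB/s column is just IOPS divided by 16, and the Total row is the per-column sum (1709.81 IOPS, 106.86 MiB/s, within rounding). Reproducing the Nvme1n1 throughput, for example:

    # MiB/s = IOPS * io_size_bytes / 2^20; numbers copied from the Nvme1n1 row
    awk 'BEGIN { iops = 208.15; io = 65536
                 printf "%.2f MiB/s\n", iops * io / 1048576 }'
    # prints 13.01 MiB/s, matching the table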
[2024-11-20 09:57:27.172489] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:56.418 [2024-11-20 09:57:27.172536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:23:56.418 [2024-11-20 09:57:27.172554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:56.418 [2024-11-20 09:57:27.173008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.418 [2024-11-20 09:57:27.173028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc39810 with addr=10.0.0.2, port=4420 00:23:56.418 [2024-11-20 09:57:27.173039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc39810 is same with the state(6) to be set 00:23:56.418 [2024-11-20 09:57:27.173347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.418 [2024-11-20 09:57:27.173358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1067180 with addr=10.0.0.2, port=4420 00:23:56.418 [2024-11-20 09:57:27.173366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1067180 is same with the state(6) to be set 00:23:56.418 [2024-11-20 09:57:27.173682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.418 [2024-11-20 09:57:27.173692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb53610 with addr=10.0.0.2, port=4420 00:23:56.418 [2024-11-20 09:57:27.173700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53610 is same with the state(6) to be set 00:23:56.418 [2024-11-20 09:57:27.175060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:56.418 [2024-11-20 09:57:27.175076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:56.418 [2024-11-20 09:57:27.175086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:56.418 [2024-11-20 09:57:27.175095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:56.418 [2024-11-20 09:57:27.175470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.418 [2024-11-20 09:57:27.175485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30fa0 with addr=10.0.0.2, port=4420 00:23:56.418 [2024-11-20 09:57:27.175493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30fa0 is same with the state(6) to be set 00:23:56.418 [2024-11-20 09:57:27.175684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.418 [2024-11-20 09:57:27.175694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082310 with addr=10.0.0.2, port=4420 00:23:56.418 [2024-11-20 09:57:27.175702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082310 is same with the state(6) to be set 00:23:56.418 [2024-11-20 09:57:27.175994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.418 [2024-11-20 09:57:27.176005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc3bcb0 with addr=10.0.0.2, port=4420 00:23:56.418 [2024-11-20 09:57:27.176012] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3bcb0 is same with the state(6) to be set 00:23:56.418 [2024-11-20 09:57:27.176030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc39810 (9): Bad file descriptor 00:23:56.418 [2024-11-20 09:57:27.176042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1067180 (9): Bad file descriptor 00:23:56.418 [2024-11-20 09:57:27.176052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb53610 (9): Bad file descriptor 00:23:56.418 [2024-11-20 09:57:27.176089] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:23:56.418 [2024-11-20 09:57:27.176101] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:23:56.418 [2024-11-20 09:57:27.176113] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:23:56.418 [2024-11-20 09:57:27.176679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.418 [2024-11-20 09:57:27.176696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10afd00 with addr=10.0.0.2, port=4420 00:23:56.418 [2024-11-20 09:57:27.176704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10afd00 is same with the state(6) to be set 00:23:56.418 [2024-11-20 09:57:27.176897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.418 [2024-11-20 09:57:27.176906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc329f0 with addr=10.0.0.2, port=4420 00:23:56.418 [2024-11-20 09:57:27.176914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc329f0 is same with the state(6) to be set 00:23:56.418 [2024-11-20 09:57:27.177218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.418 [2024-11-20 09:57:27.177228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc38420 with addr=10.0.0.2, port=4420 00:23:56.418 [2024-11-20 09:57:27.177236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc38420 is same with the state(6) to be set 00:23:56.418 [2024-11-20 09:57:27.177608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.418 [2024-11-20 09:57:27.177618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108df20 with addr=10.0.0.2, port=4420 00:23:56.418 [2024-11-20 09:57:27.177625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108df20 is same with the state(6) to be set 00:23:56.418 [2024-11-20 09:57:27.177635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30fa0 (9): Bad file descriptor 00:23:56.418 [2024-11-20 09:57:27.177646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082310 (9): Bad file descriptor 00:23:56.418 [2024-11-20 09:57:27.177655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3bcb0 (9): Bad file descriptor 00:23:56.418 [2024-11-20 09:57:27.177663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr 
is in error state 00:23:56.418 [2024-11-20 09:57:27.177670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:56.418 [2024-11-20 09:57:27.177678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:56.418 [2024-11-20 09:57:27.177687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:56.418 [2024-11-20 09:57:27.177695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:56.418 [2024-11-20 09:57:27.177702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:56.418 [2024-11-20 09:57:27.177709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:56.418 [2024-11-20 09:57:27.177719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:23:56.418 [2024-11-20 09:57:27.177726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:56.418 [2024-11-20 09:57:27.177733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:56.418 [2024-11-20 09:57:27.177740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:56.418 [2024-11-20 09:57:27.177746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:56.418 [2024-11-20 09:57:27.177816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10afd00 (9): Bad file descriptor 00:23:56.418 [2024-11-20 09:57:27.177828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc329f0 (9): Bad file descriptor 00:23:56.418 [2024-11-20 09:57:27.177837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc38420 (9): Bad file descriptor 00:23:56.418 [2024-11-20 09:57:27.177847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108df20 (9): Bad file descriptor 00:23:56.418 [2024-11-20 09:57:27.177855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:56.419 [2024-11-20 09:57:27.177861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:56.419 [2024-11-20 09:57:27.177868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:56.419 [2024-11-20 09:57:27.177875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:23:56.419 [2024-11-20 09:57:27.177882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:56.419 [2024-11-20 09:57:27.177889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:56.419 [2024-11-20 09:57:27.177896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:23:56.419 [2024-11-20 09:57:27.177902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:23:56.419 [2024-11-20 09:57:27.177909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:56.419 [2024-11-20 09:57:27.177916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:56.419 [2024-11-20 09:57:27.177922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:56.419 [2024-11-20 09:57:27.177929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:56.419 [2024-11-20 09:57:27.177956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:56.419 [2024-11-20 09:57:27.177964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:56.419 [2024-11-20 09:57:27.177971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:56.419 [2024-11-20 09:57:27.177977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:56.419 [2024-11-20 09:57:27.177985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:56.419 [2024-11-20 09:57:27.177991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:56.419 [2024-11-20 09:57:27.177998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:56.419 [2024-11-20 09:57:27.178004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:56.419 [2024-11-20 09:57:27.178014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:56.419 [2024-11-20 09:57:27.178020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:56.419 [2024-11-20 09:57:27.178027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:56.419 [2024-11-20 09:57:27.178033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:56.419 [2024-11-20 09:57:27.178041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:56.419 [2024-11-20 09:57:27.178048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:56.419 [2024-11-20 09:57:27.178054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:56.419 [2024-11-20 09:57:27.178061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
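The teardown storm above follows one pattern per controller: connect() fails with errno = 111 — ECONNREFUSED on Linux, since nothing is listening on 10.0.0.2:4420 once the target is gone — so the qpair flush reports a bad file descriptor, reinitialization fails, and bdev_nvme gives up the reset. A few greps summarize the blast radius when triaging a run like this one (LOG is a hypothetical path to the captured console output, one message per line):

    LOG=console.log                                       # hypothetical capture of this output
    grep -o 'ABORTED - SQ DELETION' "$LOG" | wc -l        # completions killed by SQ teardown
    grep -o 'connect() failed, errno = [0-9]*' "$LOG" | sort | uniq -c
    grep -o '\[nqn[^]]*\] Resetting controller failed' "$LOG" | sort | uniq -c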
00:23:56.679 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:57.618 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1442441 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1442441 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1442441 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:57.619 rmmod nvme_tcp 00:23:57.619 
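The NOT wait 1442441 trace just above walks through the harness's exit-status normalization: wait on the already-killed bdevperf pid returns 255, anything above 128 is collapsed to 127, the case statement maps that down to 1, and (( !es == 0 )) then succeeds precisely because the wrapped command failed — the assertion shutdown_tc3 wants. A minimal sketch of that logic, reconstructed from the trace (the real autotest_common.sh helper also vets its argument through valid_exec_arg first):

    NOT() {
        local es=0
        "$@" || es=$?             # run the wrapped command, keep its status
        (( es > 128 )) && es=127  # e.g. wait on a SIGKILLed pid yields 255
        case "$es" in
            127) es=1 ;;          # collapse 127 down to a plain failure
        esac
        (( !es == 0 ))            # succeed iff the wrapped command failed
    }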
rmmod nvme_fabrics 00:23:57.619 rmmod nvme_keyring 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1442060 ']' 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1442060 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1442060 ']' 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1442060 00:23:57.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1442060) - No such process 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1442060 is not found' 00:23:57.619 Process with pid 1442060 is not found 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.619 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:00.162 00:24:00.162 real 0m7.788s 00:24:00.162 user 0m19.017s 00:24:00.162 sys 0m1.235s 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:00.162 ************************************ 00:24:00.162 END TEST nvmf_shutdown_tc3 00:24:00.162 ************************************ 00:24:00.162 09:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:00.162 ************************************ 00:24:00.162 START TEST nvmf_shutdown_tc4 00:24:00.162 ************************************ 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:00.162 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:00.162 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:00.162 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.163 09:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:00.163 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:00.163 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:00.163 09:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:00.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:24:00.163 00:24:00.163 --- 10.0.0.2 ping statistics --- 00:24:00.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.163 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:00.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:24:00.163 00:24:00.163 --- 10.0.0.1 ping statistics --- 00:24:00.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.163 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1443699 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1443699 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1443699 ']' 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
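[Editor's sketch] Condensing the nvmf_tcp_init trace above: the target-side port cvl_0_0 is moved into its own network namespace (cvl_0_0_ns_spdk) and given 10.0.0.2/24, while the initiator side cvl_0_1 keeps 10.0.0.1/24 in the root namespace, so initiator and target exchange real NVMe/TCP traffic over the link instead of loopback. The core commands, condensed from the trace (the SPDK_NVMF comment tag on the iptables rule is dropped here for brevity; teardown greps for it to strip the rule):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                     # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator

This is also why nvmf_tgt above is launched under `ip netns exec cvl_0_0_ns_spdk`: only inside that namespace can it bind the 10.0.0.2 listener.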
00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.163 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:00.163 [2024-11-20 09:57:31.032155] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:24:00.163 [2024-11-20 09:57:31.032226] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.423 [2024-11-20 09:57:31.126240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:00.423 [2024-11-20 09:57:31.161058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.423 [2024-11-20 09:57:31.161088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.423 [2024-11-20 09:57:31.161094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.423 [2024-11-20 09:57:31.161099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.423 [2024-11-20 09:57:31.161104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.423 [2024-11-20 09:57:31.162776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.423 [2024-11-20 09:57:31.162929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:00.423 [2024-11-20 09:57:31.163085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.423 [2024-11-20 09:57:31.163088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:00.994 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.994 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:24:00.994 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:00.994 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:00.994 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:00.994 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.994 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:00.994 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.994 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:00.994 [2024-11-20 09:57:31.881589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.994 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.994 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:00.994 09:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:00.994 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:00.994 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:00.994 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:00.994 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:00.994 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:00.994 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:00.994 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:01.254 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:01.254 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:01.254 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:01.254 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:01.254 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:01.254 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:01.254 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:01.254 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:01.254 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:01.254 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:01.254 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:01.254 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:01.254 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:01.254 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:01.254 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:01.254 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:01.254 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:01.254 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.254 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:01.254 Malloc1 
00:24:01.254 [2024-11-20 09:57:31.992992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.254 Malloc2 00:24:01.254 Malloc3 00:24:01.254 Malloc4 00:24:01.254 Malloc5 00:24:01.254 Malloc6 00:24:01.515 Malloc7 00:24:01.515 Malloc8 00:24:01.515 Malloc9 00:24:01.515 Malloc10 00:24:01.515 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.515 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:01.515 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:01.515 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:01.515 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1443968 00:24:01.515 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:24:01.515 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:24:01.784 [2024-11-20 09:57:32.470507] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:07.077 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:07.077 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1443699 00:24:07.077 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1443699 ']' 00:24:07.077 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1443699 00:24:07.077 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:24:07.077 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.077 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1443699 00:24:07.077 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:07.077 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:07.077 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1443699' 00:24:07.077 killing process with pid 1443699 00:24:07.077 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1443699 00:24:07.077 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1443699 00:24:07.077 Write completed with error (sct=0, 
sc=8)
[... several hundred near-identical lines condensed: "Write completed with error (sct=0, sc=8)" interleaved with "starting I/O failed: -6", as every in-flight write against the killed target is aborted; the distinct events within the storm are kept below ...]
00:24:07.077 [2024-11-20 09:57:37.474078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163440 is same with the state(6) to be set [repeated for this tqpair]
00:24:07.077 [2024-11-20 09:57:37.474194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:07.078 [2024-11-20 09:57:37.474857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2165650 is same with the state(6) to be set [repeated]
00:24:07.078 [2024-11-20 09:57:37.475117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:07.078 [2024-11-20 09:57:37.475148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2165b20 is same with the state(6) to be set [repeated]
00:24:07.078 [2024-11-20 09:57:37.475346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2165ff0 is same with the state(6) to be set [repeated]
00:24:07.078 [2024-11-20 09:57:37.475567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2165180 is same with the state(6) to be set [repeated]
00:24:07.078 [2024-11-20 09:57:37.475837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21642f0 is same with the state(6) to be set [repeated]
00:24:07.078 [2024-11-20 09:57:37.476028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:07.078 [2024-11-20 09:57:37.476073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21647c0 is same with the state(6) to be set [repeated]
00:24:07.079 [2024-11-20 09:57:37.476375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2164c90 is same with the state(6) to be set [repeated]
00:24:07.079 NVMe io qpair process completion error
00:24:07.079 [2024-11-20 09:57:37.477756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:07.079 [2024-11-20 09:57:37.478667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:07.080 [2024-11-20 09:57:37.479583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... write-failure reports continue; the capture ends mid-storm ...]
00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 [2024-11-20 09:57:37.480856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:07.080 NVMe io qpair process completion error 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write 
completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 starting I/O failed: -6 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.080 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 [2024-11-20 09:57:37.482023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error 
(sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 [2024-11-20 09:57:37.482854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 
Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 [2024-11-20 09:57:37.483800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting 
I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.081 starting I/O failed: -6 00:24:07.081 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O 
failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 [2024-11-20 09:57:37.486440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:07.082 NVMe io qpair process completion error 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 
00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 [2024-11-20 09:57:37.487824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 [2024-11-20 09:57:37.488674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, 
sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.082 starting I/O failed: -6 00:24:07.082 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 
00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 [2024-11-20 09:57:37.489619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write 
completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 [2024-11-20 09:57:37.491600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.083 NVMe io qpair process completion error 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 starting I/O failed: -6 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.083 Write completed 
with error (sct=0, sc=8) 00:24:07.083 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 [2024-11-20 09:57:37.492794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:07.084 starting I/O failed: -6 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed 
with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 [2024-11-20 09:57:37.493644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write 
completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 [2024-11-20 09:57:37.494581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.084 Write completed with error (sct=0, sc=8) 00:24:07.084 starting I/O failed: -6 00:24:07.085 Write completed with error (sct=0, sc=8) 00:24:07.085 starting I/O failed: -6 00:24:07.085 Write completed with error (sct=0, sc=8) 00:24:07.085 starting I/O failed: -6 00:24:07.085 Write completed with error (sct=0, sc=8) 00:24:07.085 starting I/O failed: -6 
00:24:07.085 Write completed with error (sct=0, sc=8)
00:24:07.085 starting I/O failed: -6
[... identical write-failure completions repeated ...]
00:24:07.085 [2024-11-20 09:57:37.496487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:07.085 NVMe io qpair process completion error
00:24:07.085 Write completed with error (sct=0, sc=8)
00:24:07.085 starting I/O failed: -6
[... identical write-failure completions repeated ...]
00:24:07.085 [2024-11-20 09:57:37.497652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:07.086 Write completed with error (sct=0, sc=8)
00:24:07.086 starting I/O failed: -6
[... identical write-failure completions repeated ...]
00:24:07.086 [2024-11-20 09:57:37.498471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:07.086 Write completed with error (sct=0, sc=8)
00:24:07.086 starting I/O failed: -6
[... identical write-failure completions repeated ...]
00:24:07.086 [2024-11-20 09:57:37.499409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:07.086 Write completed with error (sct=0, sc=8)
00:24:07.087 starting I/O failed: -6
[... identical write-failure completions repeated ...]
00:24:07.087 [2024-11-20 09:57:37.501587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:07.087 NVMe io qpair process completion error
00:24:07.087 Write completed with error (sct=0, sc=8)
00:24:07.087 starting I/O failed: -6
[... identical write-failure completions repeated ...]
00:24:07.087 [2024-11-20 09:57:37.502816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:07.087 Write completed with error (sct=0, sc=8)
00:24:07.087 starting I/O failed: -6
[... identical write-failure completions repeated ...]
00:24:07.087 [2024-11-20 09:57:37.503625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:07.087 Write completed with error (sct=0, sc=8)
00:24:07.088 starting I/O failed: -6
[... identical write-failure completions repeated ...]
00:24:07.088 [2024-11-20 09:57:37.504553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:07.088 Write completed with error (sct=0, sc=8)
00:24:07.088 starting I/O failed: -6
[... identical write-failure completions repeated ...]
00:24:07.088 [2024-11-20 09:57:37.505986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:07.088 NVMe io qpair process completion error
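Every burst in this log has the same anatomy: outstanding writes complete aborted with generic NVMe status (sct=0, sc=8) while new submissions fail with -6, then nvme_qpair.c reports a CQ transport error -6 ("No such device or address", i.e. -ENXIO) on one qpair, and "NVMe io qpair process completion error" marks the last qpair of a controller going down. The -ENXIO surfaces through spdk_nvme_qpair_process_completions(), consistent with the target's network being torn down underneath active I/O. A minimal sketch of how a poller would observe this condition; poll_io_qpair() and its wiring are hypothetical, only the SPDK call and the -ENXIO convention come from the log:

/* Sketch only: classify the failure mode logged above.  Assumes an
 * already-connected struct spdk_nvme_qpair; poll_io_qpair() itself
 * is illustrative, not part of the test code. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static bool
poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	/* max_completions == 0: reap every completion that is ready */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc == -ENXIO) {
		/* -6, "No such device or address": the transport
		 * connection behind the qpair is gone; queued writes
		 * come back aborted with (sct=0, sc=8) as seen above */
		fprintf(stderr, "CQ transport error %d on qpair\n", rc);
		return false;	/* caller should reconnect or fail over */
	}
	return rc >= 0;		/* rc >= 0 is the number of completions reaped */
}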
00:24:07.088 Write completed with error (sct=0, sc=8)
00:24:07.088 starting I/O failed: -6
[... identical write-failure completions repeated ...]
00:24:07.088 [2024-11-20 09:57:37.507129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:07.089 Write completed with error (sct=0, sc=8)
00:24:07.089 starting I/O failed: -6
[... identical write-failure completions repeated ...]
00:24:07.089 [2024-11-20 09:57:37.507948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:07.089 Write completed with error (sct=0, sc=8)
00:24:07.089 starting I/O failed: -6
[... identical write-failure completions repeated ...]
00:24:07.089 [2024-11-20 09:57:37.509283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:07.089 Write completed with error (sct=0, sc=8)
00:24:07.090 starting I/O failed: -6
[... identical write-failure completions repeated ...]
00:24:07.090 [2024-11-20 09:57:37.511850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:07.090 NVMe io qpair process completion error
00:24:07.090 Write completed with error (sct=0, sc=8)
00:24:07.090 starting I/O failed: -6
[... identical write-failure completions repeated ...]
00:24:07.090 [2024-11-20 09:57:37.512960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:07.090 Write completed with error (sct=0, sc=8)
00:24:07.090 starting I/O failed: -6
[... identical write-failure completions repeated ...]
00:24:07.090 [2024-11-20 09:57:37.513798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:07.091 Write completed with error (sct=0, sc=8)
00:24:07.091 starting I/O failed: -6
[... identical write-failure completions repeated ...]
00:24:07.091 [2024-11-20 09:57:37.514735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:07.091 Write completed with error (sct=0, sc=8)
00:24:07.091 starting I/O failed: -6
[... identical write-failure completions repeated ...]
00:24:07.091 [2024-11-20 09:57:37.516393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:07.091 NVMe io qpair process completion error
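For reference when reading the aborted writes: sct is the NVMe status code type and sc the status code, so (sct=0, sc=8) decodes as generic status "Command Aborted due to SQ Deletion" — the write was still queued when its submission queue disappeared along with the dropped connection. A hedged decoder using SPDK's spec constants; classify() and the cpl variable are illustrative:

/* Illustrative decoder for the per-command status printed above.
 * The struct and constants come from spdk/nvme_spec.h; classify()
 * is a sketch, not part of the test. */
#include "spdk/nvme_spec.h"

static const char *
classify(const struct spdk_nvme_cpl *cpl)
{
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* sct=0, sc=8: aborted because the submission queue
		 * was deleted when the qpair's connection dropped */
		return "write aborted: SQ deletion";
	}
	return spdk_nvme_cpl_is_error(cpl) ? "other error" : "success";
}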
00:24:07.091 [... repeated Write completed with error (sct=0, sc=8) / starting I/O failed: -6 lines for nqn.2016-06.io.spdk:cnode7 ...]
00:24:07.092 [2024-11-20 09:57:37.517547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:07.092 [... further repetitions ...]
00:24:07.092 [2024-11-20 09:57:37.518379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:07.092 [... further repetitions ...]
00:24:07.092 [2024-11-20 09:57:37.519326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:07.093 [... further repetitions ...]
00:24:07.093 [2024-11-20 09:57:37.521330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:07.093 NVMe io qpair process completion error
00:24:07.093 [... the remaining queued writes drain as Write completed with error (sct=0, sc=8) ...]
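Decoding the status pair: sct=0 is the NVMe generic command status set, and sc=8 in that set is "Command Aborted due to SQ Deletion", which matches the qpairs being deleted underneath the workload. A hypothetical one-liner for tallying the failure signatures when triaging a run like this (build.log is an assumed capture of this console output):

    grep -o 'sct=[0-9]*, sc=[0-9]*' build.log | sort | uniq -c | sort -rn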
00:24:07.093 Initializing NVMe Controllers
00:24:07.093 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:24:07.093 Controller IO queue size 128, less than required.
00:24:07.093 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:07.093 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:24:07.094 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:24:07.094 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:24:07.094 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:24:07.094 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:24:07.094 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:07.094 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:24:07.094 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:24:07.094 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:24:07.094 [... the same two-line 'Controller IO queue size 128, less than required' advisory is printed after each attach ...]
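The advisory means the workload's queue depth exceeds what the controller's 128-entry IO queue can hold, so requests queue in the host driver. Acting on it is just a matter of lowering -q (queue depth) or -o (I/O size) on the perf side; a hedged variant of the earlier sketch against one of the subsystems above:

    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randwrite -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode5'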
00:24:07.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:24:07.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:24:07.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:24:07.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:24:07.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:24:07.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:24:07.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:07.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:24:07.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:24:07.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:24:07.094 Initialization complete. Launching workers.
00:24:07.094 ========================================================
00:24:07.094                                                                             Latency(us)
00:24:07.094 Device Information                                                   :    IOPS   MiB/s  Average      min        max
00:24:07.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1917.36   82.39 66775.16   692.98  124142.00
00:24:07.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1914.76   82.27 66895.77   809.51  152561.60
00:24:07.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1915.84   82.32 66881.30   685.15  118193.99
00:24:07.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1903.68   81.80 67343.71   855.96  129730.62
00:24:07.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1902.16   81.73 67419.34   727.34  131282.48
00:24:07.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1847.63  79.39 68974.88   924.86  119854.60
00:24:07.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1886.73   81.07 67294.63   943.39  117422.70
00:24:07.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1887.38   81.10 67288.66   687.76  121902.42
00:24:07.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1877.61   80.68 67678.74   744.26  121339.40
00:24:07.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1890.86   81.25 67232.46   669.38  118268.96
00:24:07.094 ========================================================
00:24:07.094 Total                                                              : 18944.00  814.00 67372.54   669.38  152561.60
00:24:07.094 ========================================================
00:24:07.094
00:24:07.094 [2024-11-20 09:57:37.529416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505bc0 is same with the state(6) to be set
00:24:07.094 [2024-11-20 09:57:37.529462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505890 is same with the state(6) to be set
00:24:07.094 [2024-11-20 09:57:37.529493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505ef0 is same with the state(6) to be set
00:24:07.094 [2024-11-20 09:57:37.529522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x506740 is same with the state(6) to be set
00:24:07.094 [2024-11-20 09:57:37.529550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x506410 is same with the state(6) to be set
00:24:07.094 [2024-11-20 09:57:37.529579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x507900 is same with the state(6) to be set
00:24:07.094 [2024-11-20 09:57:37.529607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x507720 is same with the state(6) to be set
00:24:07.094 [2024-11-20 09:57:37.529636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x506a70 is same with the state(6) to be set
00:24:07.094 [2024-11-20 09:57:37.529666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x507ae0 is same with the state(6) to be set
00:24:07.094 [2024-11-20 09:57:37.529698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505560 is same with the state(6) to be set
00:24:07.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:24:07.094 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:24:08.034 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1443968
00:24:08.034 [... common/autotest_common.sh@640-655 valid_exec_arg/type checks, then: wait 1443968; es=1 ...]
00:24:08.034 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:08.034 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:08.034 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
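The NOT wait sequence above is the harness asserting an expected failure: spdk_nvme_perf must exit non-zero after its controllers vanished. The helper's core logic is roughly this (a simplified sketch, not the exact autotest_common.sh implementation):

    NOT() {
        if "$@"; then
            return 1    # the command unexpectedly succeeded
        fi
        return 0        # it failed, which is what the caller wanted
    }
    NOT wait 1443968    # passes because the perf process exited non-zero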
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:08.035 rmmod nvme_tcp
00:24:08.035 rmmod nvme_fabrics
00:24:08.035 rmmod nvme_keyring
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1443699 ']'
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1443699
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1443699 ']'
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1443699
00:24:08.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1443699) - No such process
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1443699 is not found'
00:24:08.035 Process with pid 1443699 is not found
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:08.035 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
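Condensed, the teardown this trace performs boils down to a few root-privileged steps; commands are lifted from the trace itself, except the explicit netns delete, which is an assumption about what _remove_spdk_ns does:

    modprobe -v -r nvme-tcp          # also drops nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    # drop only the SPDK-tagged firewall rules, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed equivalent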
00:24:10.579 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:10.579
00:24:10.579 real    0m10.273s
00:24:10.579 user    0m28.121s
00:24:10.579 sys     0m3.868s
00:24:10.579 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:10.579 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:24:10.579 ************************************
00:24:10.579 END TEST nvmf_shutdown_tc4
00:24:10.579 ************************************
00:24:10.579 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:24:10.579
00:24:10.579 real    0m43.216s
00:24:10.579 user    1m44.050s
00:24:10.579 sys     0m13.762s
00:24:10.579 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:10.579 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:24:10.579 ************************************
00:24:10.579 END TEST nvmf_shutdown
00:24:10.579 ************************************
00:24:10.579 09:57:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:24:10.579 09:57:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:10.579 09:57:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:10.579 09:57:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:10.579 ************************************
00:24:10.579 START TEST nvmf_nsid
00:24:10.579 ************************************
00:24:10.579 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:24:10.579 * Looking for test storage...
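The START/END banners and the real/user/sys timings above come from the harness's run_test wrapper; in rough outline (a sketch simplified from autotest_common.sh, not its full implementation):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"               # the real helper also tracks xtrace and exit status
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }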
00:24:10.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:24:10.579 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:24:10.579 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version
00:24:10.579 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:24:10.579 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:24:10.579 [... scripts/common.sh@333-368 dotted-version comparison trace omitted; 1.15 < 2, so the pre-2.x lcov flags are selected ...]
00:24:10.580 [... common/autotest_common.sh@1694-1707: LCOV_OPTS and LCOV exported with --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 plus the genhtml/geninfo coverage flags ...]
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
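The lt/cmp_versions machinery summarized above compares dotted version strings field by field to pick lcov flags. A compact equivalent, as a sketch rather than the harness's actual field-wise loop:

    lt() {
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    lt 1.15 2 && echo "lcov is older than 2.x"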
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:10.580 [... scripts/common.sh and /etc/opt/spdk-pkgdep/paths/export.sh trace: PATH re-exported several times with duplicated golangci/protoc/go toolchain prefixes; repetitions omitted ...]
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:24:10.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid=
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns
00:24:10.580 [... xtrace_disable_per_cmd _remove_spdk_ns trace omitted ...]
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable
00:24:10.580 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:24:18.915 [... nvmf/common.sh@313-326: pci_devs/net_devs/e810/x722/mlx arrays initialized; Intel E810 ids 0x1592/0x159b registered ...]
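Two small sketches on the setup above. nvme gen-hostnqn derives the host NQN from a UUID (on this rig it looks DMI-derived); a hand-rolled equivalent, with uuidgen producing a different random UUID:

    NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"

The "[: : integer expression expected" noise comes from testing an unset flag with -eq; giving it a default silences the error (the flag name here is hypothetical):

    [ "${SPDK_TEST_FLAG:-0}" -eq 1 ] && echo "flag set"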
00:24:18.915 [... nvmf/common.sh@328-344: remaining x722 and mlx PCI ids registered ...]
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:24:18.915 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:24:18.915 [... nvmf/common.sh@368-378: driver checks for 0000:4b:00.0 (ice, not unknown/unbound) ...]
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:24:18.915 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:24:18.915 [... the same driver checks for 0000:4b:00.1 ...]
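The 0x8086:0x159b vendor/device pair is an Intel E810 port bound to the ice driver, which is why the harness classifies this rig as "e810". A hedged way to confirm the same thing from a shell on the test host:

    lspci -nn -d 8086:159b    # lists the E810 functions, e.g. 0000:4b:00.0 and .1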
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:18.915 [... nvmf/common.sh@416-427: tcp/up-state checks and name trimming for the first port ...]
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:24:18.915 Found net devices under 0000:4b:00.0: cvl_0_0
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:18.915 [... the same checks for the second port ...]
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:24:18.915 Found net devices under 0000:4b:00.1: cvl_0_1
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:18.915 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:18.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:18.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms
00:24:18.916
00:24:18.916 --- 10.0.0.2 ping statistics ---
00:24:18.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:18.916 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms
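The point of the sequence above is that the target port lives in its own network namespace, so initiator and target traffic crosses a real E810 link instead of loopback. Condensed into a reproduction sketch (commands taken from the trace; run as root, interface names specific to this rig):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the harness tags this rule with an SPDK_NVMF comment
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator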
00:24:18.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:24:18.916 00:24:18.916 --- 10.0.0.1 ping statistics --- 00:24:18.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.916 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1449423 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1449423 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1449423 ']' 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:18.916 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:18.916 [2024-11-20 09:57:48.834397] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
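nvmfappstart then launches the target binary inside that namespace and blocks in waitforlisten until the RPC socket answers. A rough sketch of the launch-and-wait pattern; the polling loop is an approximation of waitforlisten, which really probes the RPC socket with a request, and the binary path is abbreviated from the trace:

    # Start nvmf_tgt in the target namespace and wait for its RPC endpoint.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
    nvmfpid=$!
    rpc_sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      [[ -S $rpc_sock ]] && break     # socket present: the app is listening
      sleep 0.1
    done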
00:24:18.916 [2024-11-20 09:57:48.834463] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.916 [2024-11-20 09:57:48.936561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.916 [2024-11-20 09:57:48.987811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.916 [2024-11-20 09:57:48.987863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.916 [2024-11-20 09:57:48.987872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.916 [2024-11-20 09:57:48.987879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.916 [2024-11-20 09:57:48.987885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:18.916 [2024-11-20 09:57:48.988686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1449662 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=032c7015-caeb-4a04-9f29-78eff3a3af44 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=9c217b68-03b1-42f6-ab42-0dc925611c3b 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=ec163c4f-3b2f-4283-ae8d-6d1fd498d013 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:18.916 null0 00:24:18.916 null1 00:24:18.916 null2 00:24:18.916 [2024-11-20 09:57:49.755107] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.916 [2024-11-20 09:57:49.763054] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:24:18.916 [2024-11-20 09:57:49.763118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1449662 ] 00:24:18.916 [2024-11-20 09:57:49.779448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1449662 /var/tmp/tgt2.sock 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1449662 ']' 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:18.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
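The rpc_cmd here-doc that creates the three null bdevs and their fixed-UUID namespaces is not echoed in the trace; only its side effects (null0/null1/null2, the 10.0.0.1:4421 listener) are. Purely as an illustration of what such a batch might contain, assuming the stock rpc.py verbs and flag names rather than the exact contents of nsid.sh:

    rpc="./scripts/rpc.py -s /var/tmp/tgt2.sock"       # assumed invocation
    $rpc nvmf_create_transport -t tcp
    $rpc bdev_null_create null0 64 512                 # name, size in MiB, block size
    $rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 --uuid "$ns1uuid"
    $rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421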
00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:18.916 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:19.178 [2024-11-20 09:57:49.855147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.178 [2024-11-20 09:57:49.907462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.439 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:19.439 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:19.439 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:19.700 [2024-11-20 09:57:50.473942] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.700 [2024-11-20 09:57:50.490127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:19.700 nvme0n1 nvme0n2 00:24:19.700 nvme1n1 00:24:19.700 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:19.700 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:19.700 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:21.083 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:21.084 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:21.084 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:24:21.084 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:21.084 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:21.084 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:21.084 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:21.084 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:21.084 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:21.084 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:21.084 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:21.084 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:21.084 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:22.470 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:22.470 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:22.470 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:22.470 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:22.470 09:57:52 
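nvme connect prints nothing about which controller node it created, so nsid.sh's nvme_connect helper (traced at lines 28-32 above) walks /sys/class/nvme until a subsystem NQN matches. The same lookup as a standalone snippet, with the NQN and transport address taken from the trace:

    subnqn=nqn.2024-10.io.spdk:cnode2
    nvme connect -t tcp -a 10.0.0.1 -s 4421 -n "$subnqn"
    for ctrlr in /sys/class/nvme/nvme*; do
      [[ -e $ctrlr/subsysnqn ]] || continue
      if [[ $(<"$ctrlr/subsysnqn") == "$subnqn" ]]; then
        echo "connected controller: ${ctrlr##*/}"      # e.g. nvme0
        break
      fi
    done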
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:22.470 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 032c7015-caeb-4a04-9f29-78eff3a3af44 00:24:22.470 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=032c7015caeb4a049f2978eff3a3af44 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 032C7015CAEB4A049F2978EFF3A3AF44 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 032C7015CAEB4A049F2978EFF3A3AF44 == \0\3\2\C\7\0\1\5\C\A\E\B\4\A\0\4\9\F\2\9\7\8\E\F\F\3\A\3\A\F\4\4 ]] 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 9c217b68-03b1-42f6-ab42-0dc925611c3b 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9c217b6803b142f6ab420dc925611c3b 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9C217B6803B142F6AB420DC925611C3B 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 9C217B6803B142F6AB420DC925611C3B == \9\C\2\1\7\B\6\8\0\3\B\1\4\2\F\6\A\B\4\2\0\D\C\9\2\5\6\1\1\C\3\B ]] 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:22.470 09:57:53 
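The check just traced is the heart of the NSID test: a namespace created with an explicit UUID must expose that UUID, dashes removed, as its NGUID. A minimal reproduction assuming nvme-cli and jq are installed, with the device name and UUID taken from the trace:

    uuid=032c7015-caeb-4a04-9f29-78eff3a3af44
    expected=$(tr -d - <<< "$uuid")                            # uuid2nguid: strip the dashes
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    # id-ns reports lowercase hex; the helper upcases before comparing.
    [[ ${nguid^^} == "${expected^^}" ]] && echo "NGUID matches UUID"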
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid ec163c4f-3b2f-4283-ae8d-6d1fd498d013 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ec163c4f3b2f4283ae8d6d1fd498d013 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo EC163C4F3B2F4283AE8D6D1FD498D013 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ EC163C4F3B2F4283AE8D6D1FD498D013 == \E\C\1\6\3\C\4\F\3\B\2\F\4\2\8\3\A\E\8\D\6\D\1\F\D\4\9\8\D\0\1\3 ]] 00:24:22.470 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:22.731 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:22.731 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:22.731 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1449662 00:24:22.731 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1449662 ']' 00:24:22.731 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1449662 00:24:22.731 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:22.731 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.731 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1449662 00:24:22.731 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:22.731 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:22.731 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1449662' 00:24:22.731 killing process with pid 1449662 00:24:22.731 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1449662 00:24:22.731 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1449662 00:24:22.992 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:22.992 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:22.992 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:22.992 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:22.992 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
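cleanup() stops the second target through killprocess(), whose guards are visible in the trace: confirm the PID is alive with kill -0, read its command name with ps, and refuse to touch a sudo wrapper. A trimmed, Linux-only sketch of that function (the real helper also handles FreeBSD and sudo-owned children):

    killprocess() {
      local pid=$1 name
      kill -0 "$pid" 2>/dev/null || return 0          # nothing to do
      name=$(ps --no-headers -o comm= "$pid")
      [[ $name == sudo ]] && return 1                 # never signal the sudo wrapper itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null                         # reap if it is our child
    }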
# set +e 00:24:22.992 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:22.992 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:22.992 rmmod nvme_tcp 00:24:22.992 rmmod nvme_fabrics 00:24:22.992 rmmod nvme_keyring 00:24:22.992 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:22.992 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:22.992 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:22.992 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1449423 ']' 00:24:22.992 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1449423 00:24:22.992 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1449423 ']' 00:24:22.992 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1449423 00:24:22.992 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:22.993 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.993 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1449423 00:24:22.993 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:22.993 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:22.993 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1449423' 00:24:22.993 killing process with pid 1449423 00:24:22.993 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1449423 00:24:22.993 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1449423 00:24:22.993 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:22.993 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:22.993 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:22.993 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:22.993 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:23.254 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:23.254 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:23.254 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:23.254 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:23.254 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.254 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.254 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.170 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:25.170 00:24:25.170 real 0m14.971s 00:24:25.170 user 
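nvmf_tcp_fini's iptr helper shows a tidy firewall-cleanup idiom: because every rule the test added carries the SPDK_NVMF comment, one save/filter/restore pass deletes them all without tracking rule numbers. The last two lines below are assumptions about what _remove_spdk_ns does, since its output is redirected away in the trace:

    # Strip every SPDK-tagged rule in one round trip; untagged rules survive.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Assumed teardown of the test topology (mirrors the address flush traced below).
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null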
0m11.420s 00:24:25.170 sys 0m6.891s 00:24:25.170 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:25.170 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:25.170 ************************************ 00:24:25.170 END TEST nvmf_nsid 00:24:25.170 ************************************ 00:24:25.170 09:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:25.170 00:24:25.170 real 13m4.207s 00:24:25.170 user 27m16.015s 00:24:25.170 sys 3m55.834s 00:24:25.170 09:57:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:25.170 09:57:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:25.170 ************************************ 00:24:25.170 END TEST nvmf_target_extra 00:24:25.170 ************************************ 00:24:25.170 09:57:56 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:25.170 09:57:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:25.170 09:57:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:25.170 09:57:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:25.431 ************************************ 00:24:25.431 START TEST nvmf_host 00:24:25.431 ************************************ 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:25.431 * Looking for test storage... 00:24:25.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:25.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.431 --rc genhtml_branch_coverage=1 00:24:25.431 --rc genhtml_function_coverage=1 00:24:25.431 --rc genhtml_legend=1 00:24:25.431 --rc geninfo_all_blocks=1 00:24:25.431 --rc geninfo_unexecuted_blocks=1 00:24:25.431 00:24:25.431 ' 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:25.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.431 --rc genhtml_branch_coverage=1 00:24:25.431 --rc genhtml_function_coverage=1 00:24:25.431 --rc genhtml_legend=1 00:24:25.431 --rc geninfo_all_blocks=1 00:24:25.431 --rc geninfo_unexecuted_blocks=1 00:24:25.431 00:24:25.431 ' 00:24:25.431 09:57:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:25.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.432 --rc genhtml_branch_coverage=1 00:24:25.432 --rc genhtml_function_coverage=1 00:24:25.432 --rc genhtml_legend=1 00:24:25.432 --rc geninfo_all_blocks=1 00:24:25.432 --rc geninfo_unexecuted_blocks=1 00:24:25.432 00:24:25.432 ' 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:25.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.432 --rc genhtml_branch_coverage=1 00:24:25.432 --rc genhtml_function_coverage=1 00:24:25.432 --rc genhtml_legend=1 00:24:25.432 --rc geninfo_all_blocks=1 00:24:25.432 --rc geninfo_unexecuted_blocks=1 00:24:25.432 00:24:25.432 ' 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
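The lcov probe above funnels into scripts/common.sh's cmp_versions: split both version strings on dots and dashes, then compare field by field numerically, padding the shorter one with zeros. A self-contained sketch of the same algorithm, valid for numeric fields only:

    version_lt() {                       # succeeds when $1 < $2
      local IFS=.-
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                           # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "installed lcov predates 2.x"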
00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:25.432 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.693 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.693 09:57:56 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.693 09:57:56 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.693 09:57:56 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.693 09:57:56 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.693 09:57:56 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:25.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.694 ************************************ 00:24:25.694 START TEST nvmf_multicontroller 00:24:25.694 ************************************ 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:25.694 * Looking for test storage... 
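The "[: : integer expression expected" message above is real (if harmless) scripting fallout, traced at nvmf/common.sh line 33: test's -eq operator needs an integer on both sides, but the flag variable expands empty. The conventional guard is a default expansion; the variable name below is illustrative, not the one common.sh actually tests:

    flag=""                                      # empty, as in the traced run
    [ "$flag" -eq 1 ] 2>/dev/null && echo on     # errors: '' is not an integer
    [ "${flag:-0}" -eq 1 ] && echo on            # defaulting to 0 keeps the test well-formed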
00:24:25.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:25.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.694 --rc genhtml_branch_coverage=1 00:24:25.694 --rc genhtml_function_coverage=1 00:24:25.694 --rc genhtml_legend=1 00:24:25.694 --rc geninfo_all_blocks=1 00:24:25.694 --rc geninfo_unexecuted_blocks=1 00:24:25.694 00:24:25.694 ' 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:25.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.694 --rc genhtml_branch_coverage=1 00:24:25.694 --rc genhtml_function_coverage=1 00:24:25.694 --rc genhtml_legend=1 00:24:25.694 --rc geninfo_all_blocks=1 00:24:25.694 --rc geninfo_unexecuted_blocks=1 00:24:25.694 00:24:25.694 ' 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:25.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.694 --rc genhtml_branch_coverage=1 00:24:25.694 --rc genhtml_function_coverage=1 00:24:25.694 --rc genhtml_legend=1 00:24:25.694 --rc geninfo_all_blocks=1 00:24:25.694 --rc geninfo_unexecuted_blocks=1 00:24:25.694 00:24:25.694 ' 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:25.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.694 --rc genhtml_branch_coverage=1 00:24:25.694 --rc genhtml_function_coverage=1 00:24:25.694 --rc genhtml_legend=1 00:24:25.694 --rc geninfo_all_blocks=1 00:24:25.694 --rc geninfo_unexecuted_blocks=1 00:24:25.694 00:24:25.694 ' 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:25.694 09:57:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.694 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:25.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:25.957 09:57:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:25.957 09:57:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:34.097 
09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:34.097 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:34.097 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.097 09:58:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:34.097 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:34.097 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
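gather_supported_nvmf_pci_devs has now matched both E810 ports (0x8086:0x159b, ice driver) against the vendor/device whitelist, and the sysfs glob at common.sh@411 turned each PCI address into its interface name. A standalone sketch of that mapping; the address list is normally produced by the whitelist scan, so it is hard-coded here to match the trace:

    pci_devs=("0000:4b:00.0" "0000:4b:00.1")    # assumed, matching the trace
    net_devs=()
    for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      [[ -e ${pci_net_devs[0]} ]] || continue                 # no interface bound
      pci_net_devs=("${pci_net_devs[@]##*/}")                 # keep interface names only
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
    done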
00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.097 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:34.098 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:34.098 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.098 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.098 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.098 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.098 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:34.098 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:34.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:24:34.098 00:24:34.098 --- 10.0.0.2 ping statistics --- 00:24:34.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.098 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:34.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:24:34.098 00:24:34.098 --- 10.0.0.1 ping statistics --- 00:24:34.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.098 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1454760 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1454760 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1454760 ']' 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.098 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.098 [2024-11-20 09:58:04.218985] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
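The nvmf_tcp_init sequence above builds a two-endpoint TCP topology on a single host: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule opens port 4420, and a ping in each direction validates the link before the target starts. This run uses two physical E810 ports wired together; the sketch below substitutes a veth pair (our assumption) so the same layout can be reproduced without that hardware:

ip netns add cvl_0_0_ns_spdk
ip link add cvl_0_1 type veth peer name cvl_0_0       # stand-in for the two E810 ports
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target side lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

Everything the target does from here on runs under ip netns exec cvl_0_0_ns_spdk, which is what the NVMF_TARGET_NS_CMD wrapper captured above.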
00:24:34.098 [2024-11-20 09:58:04.219050] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.098 [2024-11-20 09:58:04.319721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:34.098 [2024-11-20 09:58:04.371930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.098 [2024-11-20 09:58:04.371979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.098 [2024-11-20 09:58:04.371987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.098 [2024-11-20 09:58:04.371995] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.098 [2024-11-20 09:58:04.372001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.098 [2024-11-20 09:58:04.373868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.098 [2024-11-20 09:58:04.374030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.098 [2024-11-20 09:58:04.374033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.359 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.359 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:34.359 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:34.359 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:34.359 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.359 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.360 [2024-11-20 09:58:05.084637] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.360 Malloc0 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.360 [2024-11-20 09:58:05.160570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.360 [2024-11-20 09:58:05.172468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.360 Malloc1 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1454984 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1454984 /var/tmp/bdevperf.sock 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1454984 ']' 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:34.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
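The rpc_cmd calls traced above (multicontroller.sh @27 through @41) configure the target over its UNIX-domain RPC socket, which stays reachable from the root namespace even though nvmf_tgt itself runs inside the netns. Spelled out as plain rpc.py invocations, a sketch with paths relative to the spdk checkout (the trailing comment lines are ours):

RPC=./scripts/rpc.py                 # talks to /var/tmp/spdk.sock by default
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# cnode2 is built the same way from Malloc1, then bdevperf is started idle (-z)
# so the test can drive it through its own socket, /var/tmp/bdevperf.sock:
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &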
00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.360 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.301 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.301 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:35.301 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:35.301 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.301 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.562 NVMe0n1 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.562 1 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.562 request: 00:24:35.562 { 00:24:35.562 "name": "NVMe0", 00:24:35.562 "trtype": "tcp", 00:24:35.562 "traddr": "10.0.0.2", 00:24:35.562 "adrfam": "ipv4", 00:24:35.562 "trsvcid": "4420", 00:24:35.562 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:24:35.562 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:35.562 "hostaddr": "10.0.0.1", 00:24:35.562 "prchk_reftag": false, 00:24:35.562 "prchk_guard": false, 00:24:35.562 "hdgst": false, 00:24:35.562 "ddgst": false, 00:24:35.562 "allow_unrecognized_csi": false, 00:24:35.562 "method": "bdev_nvme_attach_controller", 00:24:35.562 "req_id": 1 00:24:35.562 } 00:24:35.562 Got JSON-RPC error response 00:24:35.562 response: 00:24:35.562 { 00:24:35.562 "code": -114, 00:24:35.562 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:35.562 } 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.562 request: 00:24:35.562 { 00:24:35.562 "name": "NVMe0", 00:24:35.562 "trtype": "tcp", 00:24:35.562 "traddr": "10.0.0.2", 00:24:35.562 "adrfam": "ipv4", 00:24:35.562 "trsvcid": "4420", 00:24:35.562 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:35.562 "hostaddr": "10.0.0.1", 00:24:35.562 "prchk_reftag": false, 00:24:35.562 "prchk_guard": false, 00:24:35.562 "hdgst": false, 00:24:35.562 "ddgst": false, 00:24:35.562 "allow_unrecognized_csi": false, 00:24:35.562 "method": "bdev_nvme_attach_controller", 00:24:35.562 "req_id": 1 00:24:35.562 } 00:24:35.562 Got JSON-RPC error response 00:24:35.562 response: 00:24:35.562 { 00:24:35.562 "code": -114, 00:24:35.562 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:35.562 } 00:24:35.562 09:58:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:35.562 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.563 request: 00:24:35.563 { 00:24:35.563 "name": "NVMe0", 00:24:35.563 "trtype": "tcp", 00:24:35.563 "traddr": "10.0.0.2", 00:24:35.563 "adrfam": "ipv4", 00:24:35.563 "trsvcid": "4420", 00:24:35.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.563 "hostaddr": "10.0.0.1", 00:24:35.563 "prchk_reftag": false, 00:24:35.563 "prchk_guard": false, 00:24:35.563 "hdgst": false, 00:24:35.563 "ddgst": false, 00:24:35.563 "multipath": "disable", 00:24:35.563 "allow_unrecognized_csi": false, 00:24:35.563 "method": "bdev_nvme_attach_controller", 00:24:35.563 "req_id": 1 00:24:35.563 } 00:24:35.563 Got JSON-RPC error response 00:24:35.563 response: 00:24:35.563 { 00:24:35.563 "code": -114, 00:24:35.563 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:35.563 } 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:35.563 09:58:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.563 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.824 request: 00:24:35.824 { 00:24:35.824 "name": "NVMe0", 00:24:35.824 "trtype": "tcp", 00:24:35.824 "traddr": "10.0.0.2", 00:24:35.824 "adrfam": "ipv4", 00:24:35.824 "trsvcid": "4420", 00:24:35.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.824 "hostaddr": "10.0.0.1", 00:24:35.824 "prchk_reftag": false, 00:24:35.824 "prchk_guard": false, 00:24:35.824 "hdgst": false, 00:24:35.824 "ddgst": false, 00:24:35.824 "multipath": "failover", 00:24:35.824 "allow_unrecognized_csi": false, 00:24:35.824 "method": "bdev_nvme_attach_controller", 00:24:35.824 "req_id": 1 00:24:35.824 } 00:24:35.824 Got JSON-RPC error response 00:24:35.824 response: 00:24:35.824 { 00:24:35.824 "code": -114, 00:24:35.824 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:35.824 } 00:24:35.824 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:35.824 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:35.824 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:35.824 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:35.824 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:35.824 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:35.824 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.824 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.824 NVMe0n1 00:24:35.824 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
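The four NOT cases above probe bdev_nvme_attach_controller's multipath rules: reusing the controller name NVMe0 is only allowed when the new path is consistent with the existing one, so a mismatched hostnqn, a different subsystem NQN, an explicit -x disable, and a failover request for the already-attached path are all rejected with JSON-RPC error -114, while the final attach succeeds because it adds 10.0.0.2:4421 as a genuine second path to the same subsystem. Condensed into a sketch, with outcomes from this run's responses and the failure reasons per the test's intent (the B array is shorthand of ours):

B=(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller
   -b NVMe0 -t tcp -a 10.0.0.2 -f ipv4 -n nqn.2016-06.io.spdk:cnode1)
"${B[@]}" -s 4420 -i 10.0.0.1                                 # ok: creates NVMe0n1
"${B[@]}" -s 4420 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001  # -114: mismatched hostnqn
"${B[@]/cnode1/cnode2}" -s 4420 -i 10.0.0.1                   # -114: other subsystem, same name
"${B[@]}" -s 4420 -i 10.0.0.1 -x disable                      # -114: exists, multipath disabled
"${B[@]}" -s 4420 -i 10.0.0.1 -x failover                     # -114: exists with this exact path
"${B[@]}" -s 4421                                             # ok: adds the port-4421 path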
00:24:35.824 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:35.824 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.824 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.824 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.824 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:35.824 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.824 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:36.085 00:24:36.085 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.085 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:36.085 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:36.085 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.085 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:36.085 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.085 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:36.085 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:37.025 { 00:24:37.025 "results": [ 00:24:37.025 { 00:24:37.025 "job": "NVMe0n1", 00:24:37.025 "core_mask": "0x1", 00:24:37.025 "workload": "write", 00:24:37.025 "status": "finished", 00:24:37.025 "queue_depth": 128, 00:24:37.025 "io_size": 4096, 00:24:37.025 "runtime": 1.007353, 00:24:37.025 "iops": 27314.15898895422, 00:24:37.025 "mibps": 106.69593355060242, 00:24:37.025 "io_failed": 0, 00:24:37.025 "io_timeout": 0, 00:24:37.025 "avg_latency_us": 4671.462569022958, 00:24:37.025 "min_latency_us": 2102.6133333333332, 00:24:37.025 "max_latency_us": 13544.106666666667 00:24:37.025 } 00:24:37.025 ], 00:24:37.025 "core_count": 1 00:24:37.025 } 00:24:37.286 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:37.286 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.286 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.286 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.286 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:37.286 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1454984 00:24:37.286 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 1454984 ']' 00:24:37.286 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1454984 00:24:37.286 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:37.286 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:37.286 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1454984 00:24:37.286 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:37.286 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:37.286 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1454984' 00:24:37.286 killing process with pid 1454984 00:24:37.286 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1454984 00:24:37.286 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1454984 00:24:37.286 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:37.286 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.286 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.286 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.286 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:37.286 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.286 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.286 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.286 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:37.286 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:37.286 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:37.286 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:37.286 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:24:37.286 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:24:37.286 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:37.286 [2024-11-20 09:58:05.303639] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:24:37.286 [2024-11-20 09:58:05.303722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1454984 ] 00:24:37.286 [2024-11-20 09:58:05.399117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.286 [2024-11-20 09:58:05.453438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.286 [2024-11-20 09:58:06.797142] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name af407722-b48e-4afe-93de-59482acead8f already exists 00:24:37.286 [2024-11-20 09:58:06.797194] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:af407722-b48e-4afe-93de-59482acead8f alias for bdev NVMe1n1 00:24:37.286 [2024-11-20 09:58:06.797204] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:37.286 Running I/O for 1 seconds... 00:24:37.286 27307.00 IOPS, 106.67 MiB/s 00:24:37.286 Latency(us) 00:24:37.286 [2024-11-20T08:58:08.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.286 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:37.286 NVMe0n1 : 1.01 27314.16 106.70 0.00 0.00 4671.46 2102.61 13544.11 00:24:37.286 [2024-11-20T08:58:08.202Z] =================================================================================================================== 00:24:37.286 [2024-11-20T08:58:08.203Z] Total : 27314.16 106.70 0.00 0.00 4671.46 2102.61 13544.11 00:24:37.287 Received shutdown signal, test time was about 1.000000 seconds 00:24:37.287 00:24:37.287 Latency(us) 00:24:37.287 [2024-11-20T08:58:08.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.287 [2024-11-20T08:58:08.203Z] =================================================================================================================== 00:24:37.287 [2024-11-20T08:58:08.203Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:37.287 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:37.287 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:37.547 rmmod nvme_tcp 00:24:37.547 rmmod nvme_fabrics 00:24:37.547 rmmod nvme_keyring 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 
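A note on the teardown just below: the ipts wrapper used during setup tags every rule it adds with an 'SPDK_NVMF:' comment, so nvmf_tcp_fini's iptr step can restore the firewall by filtering on that tag instead of tracking rules one by one. The idiom in isolation, both commands as they appear verbatim in this trace:

# setup: insert the ACCEPT rule with a self-describing SPDK_NVMF tag
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# teardown (iptr): re-load the ruleset minus everything SPDK tagged
iptables-save | grep -v SPDK_NVMF | iptables-restore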
00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1454760 ']' 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1454760 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1454760 ']' 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1454760 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1454760 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1454760' 00:24:37.547 killing process with pid 1454760 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1454760 00:24:37.547 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1454760 00:24:37.808 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:37.808 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:37.808 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:37.808 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:37.808 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:37.808 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:37.808 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:37.808 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:37.808 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:37.808 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.808 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.808 09:58:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.724 09:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:39.724 00:24:39.724 real 0m14.164s 00:24:39.724 user 0m17.671s 00:24:39.724 sys 0m6.589s 00:24:39.724 09:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:39.724 09:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.724 ************************************ 00:24:39.724 END TEST nvmf_multicontroller 00:24:39.724 ************************************ 00:24:39.724 09:58:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:39.724 09:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:39.724 09:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:39.724 09:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.987 ************************************ 00:24:39.987 START TEST nvmf_aer 00:24:39.987 ************************************ 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:39.987 * Looking for test storage... 00:24:39.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:39.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.987 --rc genhtml_branch_coverage=1 00:24:39.987 --rc genhtml_function_coverage=1 00:24:39.987 --rc genhtml_legend=1 00:24:39.987 --rc geninfo_all_blocks=1 00:24:39.987 --rc geninfo_unexecuted_blocks=1 00:24:39.987 00:24:39.987 ' 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:39.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.987 --rc genhtml_branch_coverage=1 00:24:39.987 --rc genhtml_function_coverage=1 00:24:39.987 --rc genhtml_legend=1 00:24:39.987 --rc geninfo_all_blocks=1 00:24:39.987 --rc geninfo_unexecuted_blocks=1 00:24:39.987 00:24:39.987 ' 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:39.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.987 --rc genhtml_branch_coverage=1 00:24:39.987 --rc genhtml_function_coverage=1 00:24:39.987 --rc genhtml_legend=1 00:24:39.987 --rc geninfo_all_blocks=1 00:24:39.987 --rc geninfo_unexecuted_blocks=1 00:24:39.987 00:24:39.987 ' 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:39.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.987 --rc genhtml_branch_coverage=1 00:24:39.987 --rc genhtml_function_coverage=1 00:24:39.987 --rc genhtml_legend=1 00:24:39.987 --rc geninfo_all_blocks=1 00:24:39.987 --rc geninfo_unexecuted_blocks=1 00:24:39.987 00:24:39.987 ' 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:39.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:39.987 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:39.988 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:39.988 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:39.988 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:39.988 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.988 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:39.988 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:39.988 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:39.988 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.988 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.988 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.988 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:39.988 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:39.988 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:39.988 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:48.137 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.137 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:48.138 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:48.138 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:48.138 09:58:18 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:48.138 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:48.138 
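
The nvmf_tcp_init block traced above amounts to a split-NIC topology: the first ice port (cvl_0_0) is moved into a private network namespace to serve as the target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. (The earlier "[: : integer expression expected" message from common.sh line 33 is a harmless bash complaint about testing an empty variable with -eq; the condition evaluates false and the run continues.) A minimal sketch of the same setup, using the interface and namespace names from this run:

  ip netns add cvl_0_0_ns_spdk                  # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic

The two single-packet pings that follow verify the namespaces can reach each other before any NVMe traffic is attempted.
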
09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:48.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:48.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:24:48.138 00:24:48.138 --- 10.0.0.2 ping statistics --- 00:24:48.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.138 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:48.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:48.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:24:48.138 00:24:48.138 --- 10.0.0.1 ping statistics --- 00:24:48.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.138 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1459793 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1459793 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1459793 ']' 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:48.138 09:58:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.138 [2024-11-20 09:58:18.463521] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
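
With connectivity confirmed, nvmfappstart launches the SPDK target inside the namespace and blocks until its RPC socket answers. A hedged sketch of the equivalent by hand (paths relative to the spdk checkout; the real waitforlisten in autotest_common.sh does more bookkeeping than this loop):

  modprobe nvme-tcp                                   # kernel NVMe/TCP initiator driver
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &    # shm id 0, all tracepoints, 4-core mask
  nvmfpid=$!
  until ./scripts/rpc.py -t 1 rpc_get_methods &>/dev/null; do
      sleep 0.1                                       # wait for /var/tmp/spdk.sock to accept RPCs
  done

The "Total cores available: 4" notice and the four "Reactor started" lines just below are the direct result of the -m 0xF core mask.
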
00:24:48.139 [2024-11-20 09:58:18.463588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.139 [2024-11-20 09:58:18.562748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:48.139 [2024-11-20 09:58:18.617067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.139 [2024-11-20 09:58:18.617123] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.139 [2024-11-20 09:58:18.617132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.139 [2024-11-20 09:58:18.617139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.139 [2024-11-20 09:58:18.617145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:48.139 [2024-11-20 09:58:18.619556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.139 [2024-11-20 09:58:18.619714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:48.139 [2024-11-20 09:58:18.619875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.139 [2024-11-20 09:58:18.619875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:48.400 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.400 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:48.400 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:48.400 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:48.400 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.661 [2024-11-20 09:58:19.343755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.661 Malloc0 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.661 [2024-11-20 09:58:19.422348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.661 [ 00:24:48.661 { 00:24:48.661 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:48.661 "subtype": "Discovery", 00:24:48.661 "listen_addresses": [], 00:24:48.661 "allow_any_host": true, 00:24:48.661 "hosts": [] 00:24:48.661 }, 00:24:48.661 { 00:24:48.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.661 "subtype": "NVMe", 00:24:48.661 "listen_addresses": [ 00:24:48.661 { 00:24:48.661 "trtype": "TCP", 00:24:48.661 "adrfam": "IPv4", 00:24:48.661 "traddr": "10.0.0.2", 00:24:48.661 "trsvcid": "4420" 00:24:48.661 } 00:24:48.661 ], 00:24:48.661 "allow_any_host": true, 00:24:48.661 "hosts": [], 00:24:48.661 "serial_number": "SPDK00000000000001", 00:24:48.661 "model_number": "SPDK bdev Controller", 00:24:48.661 "max_namespaces": 2, 00:24:48.661 "min_cntlid": 1, 00:24:48.661 "max_cntlid": 65519, 00:24:48.661 "namespaces": [ 00:24:48.661 { 00:24:48.661 "nsid": 1, 00:24:48.661 "bdev_name": "Malloc0", 00:24:48.661 "name": "Malloc0", 00:24:48.661 "nguid": "8CB5CDA1596245C2BB2D5EB143D20546", 00:24:48.661 "uuid": "8cb5cda1-5962-45c2-bb2d-5eb143d20546" 00:24:48.661 } 00:24:48.661 ] 00:24:48.661 } 00:24:48.661 ] 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1459915 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']'
00:24:48.661 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']'
00:24:48.662 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1
00:24:48.662 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:24:48.662 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:24:48.662 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']'
00:24:48.662 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2
00:24:48.662 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:24:48.923 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:24:48.923 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:24:48.923 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0
00:24:48.923 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:24:48.923 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:48.923 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:48.923 Malloc1
00:24:48.923 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:48.923 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:24:48.923 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:48.923 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:48.923 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:48.923 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:24:48.923 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:48.923 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:48.923 Asynchronous Event Request test
00:24:48.923 Attaching to 10.0.0.2
00:24:48.923 Attached to 10.0.0.2
00:24:48.923 Registering asynchronous event callbacks...
00:24:48.923 Starting namespace attribute notice tests for all controllers...
00:24:48.923 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:24:48.923 aer_cb - Changed Namespace
00:24:48.923 Cleaning up...
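
This sequence is the heart of the test: host/aer.sh provisions a subsystem over RPC, starts the aer tool with -n 2 (expect up to two namespaces) plus a touch-file handshake (the @1270-@1280 loop above is waitforfile polling /tmp/aer_touch_file every 100 ms), then hot-adds a second namespace so the target emits a Namespace Attribute Changed notice: asynchronous event type 0x02 (Notice), info 0x00, reported through log page 4, the Changed Namespace List. Collected in plain rpc.py form, the calls traced piecemeal above are:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 --name Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 64 4096 --name Malloc1                       # added after the tool attaches...
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # ...and this fires the AEN

The nvmf_get_subsystems dump that follows confirms both namespaces (Malloc0 as nsid 1, Malloc1 as nsid 2) are attached: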
00:24:48.923 [ 00:24:48.923 { 00:24:48.923 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:48.923 "subtype": "Discovery", 00:24:48.923 "listen_addresses": [], 00:24:48.923 "allow_any_host": true, 00:24:48.923 "hosts": [] 00:24:48.923 }, 00:24:48.923 { 00:24:48.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.923 "subtype": "NVMe", 00:24:48.923 "listen_addresses": [ 00:24:48.923 { 00:24:48.923 "trtype": "TCP", 00:24:48.923 "adrfam": "IPv4", 00:24:48.923 "traddr": "10.0.0.2", 00:24:48.923 "trsvcid": "4420" 00:24:48.923 } 00:24:48.923 ], 00:24:48.923 "allow_any_host": true, 00:24:48.923 "hosts": [], 00:24:48.923 "serial_number": "SPDK00000000000001", 00:24:48.923 "model_number": "SPDK bdev Controller", 00:24:48.923 "max_namespaces": 2, 00:24:48.923 "min_cntlid": 1, 00:24:48.924 "max_cntlid": 65519, 00:24:48.924 "namespaces": [ 00:24:48.924 { 00:24:48.924 "nsid": 1, 00:24:48.924 "bdev_name": "Malloc0", 00:24:48.924 "name": "Malloc0", 00:24:48.924 "nguid": "8CB5CDA1596245C2BB2D5EB143D20546", 00:24:48.924 "uuid": "8cb5cda1-5962-45c2-bb2d-5eb143d20546" 00:24:48.924 }, 00:24:48.924 { 00:24:48.924 "nsid": 2, 00:24:48.924 "bdev_name": "Malloc1", 00:24:48.924 "name": "Malloc1", 00:24:48.924 "nguid": "97E19153231843389F5C0C36EFFF516A", 00:24:48.924 "uuid": "97e19153-2318-4338-9f5c-0c36efff516a" 00:24:48.924 } 00:24:48.924 ] 00:24:48.924 } 00:24:48.924 ] 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1459915 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:48.924 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:48.924 rmmod 
nvme_tcp 00:24:49.185 rmmod nvme_fabrics 00:24:49.185 rmmod nvme_keyring 00:24:49.185 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:49.185 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:49.185 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:49.185 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1459793 ']' 00:24:49.185 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1459793 00:24:49.185 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1459793 ']' 00:24:49.185 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1459793 00:24:49.185 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:24:49.185 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:49.185 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1459793 00:24:49.185 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:49.185 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:49.185 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1459793' 00:24:49.185 killing process with pid 1459793 00:24:49.185 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1459793 00:24:49.185 09:58:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1459793 00:24:49.446 09:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:49.446 09:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:49.446 09:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:49.446 09:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:49.446 09:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:49.446 09:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:49.446 09:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:49.446 09:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:49.446 09:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:49.446 09:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.446 09:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.446 09:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.360 09:58:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:51.360 00:24:51.360 real 0m11.571s 00:24:51.360 user 0m8.278s 00:24:51.360 sys 0m6.186s 00:24:51.360 09:58:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:51.360 09:58:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:51.360 ************************************ 00:24:51.360 END TEST nvmf_aer 00:24:51.360 ************************************ 00:24:51.360 09:58:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:51.360 09:58:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:51.360 09:58:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:51.360 09:58:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.642 ************************************ 00:24:51.642 START TEST nvmf_async_init 00:24:51.642 ************************************ 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:51.642 * Looking for test storage... 00:24:51.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:51.642 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:51.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.643 --rc genhtml_branch_coverage=1 00:24:51.643 --rc genhtml_function_coverage=1 00:24:51.643 --rc genhtml_legend=1 00:24:51.643 --rc geninfo_all_blocks=1 00:24:51.643 --rc geninfo_unexecuted_blocks=1 00:24:51.643 00:24:51.643 ' 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:51.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.643 --rc genhtml_branch_coverage=1 00:24:51.643 --rc genhtml_function_coverage=1 00:24:51.643 --rc genhtml_legend=1 00:24:51.643 --rc geninfo_all_blocks=1 00:24:51.643 --rc geninfo_unexecuted_blocks=1 00:24:51.643 00:24:51.643 ' 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:51.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.643 --rc genhtml_branch_coverage=1 00:24:51.643 --rc genhtml_function_coverage=1 00:24:51.643 --rc genhtml_legend=1 00:24:51.643 --rc geninfo_all_blocks=1 00:24:51.643 --rc geninfo_unexecuted_blocks=1 00:24:51.643 00:24:51.643 ' 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:51.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.643 --rc genhtml_branch_coverage=1 00:24:51.643 --rc genhtml_function_coverage=1 00:24:51.643 --rc genhtml_legend=1 00:24:51.643 --rc geninfo_all_blocks=1 00:24:51.643 --rc geninfo_unexecuted_blocks=1 00:24:51.643 00:24:51.643 ' 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.643 09:58:22 
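
The "lt 1.15 2" exchange traced above is scripts/common.sh's generic dotted-version comparison guarding the lcov option set: both version strings are split on ".-" into arrays and compared field by field, so 1.15 sorts below 2 and the newer-lcov flags get exported. A self-contained sketch of the same idea (simplified; scripts/common.sh routes several comparison operators through the same cmp_versions helper):

  ver_lt() {                                # usage: ver_lt 1.15 2
      local IFS=.-
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1                              # equal is not less-than
  }
  ver_lt 1.15 2 && echo "older lcov"        # prints: older lcov
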
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:51.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:51.643 09:58:22 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=b50fde51a0334f36a11fc49b04dfbbfa 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.643 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.904 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:51.904 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:51.904 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:51.904 09:58:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:00.050 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:00.051 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:00.051 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:00.051 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:00.051 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.051 09:58:29 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.051 09:58:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:00.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.537 ms 00:25:00.051 00:25:00.051 --- 10.0.0.2 ping statistics --- 00:25:00.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.051 rtt min/avg/max/mdev = 0.537/0.537/0.537/0.000 ms 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:00.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:25:00.051 00:25:00.051 --- 10.0.0.1 ping statistics --- 00:25:00.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.051 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1464175 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1464175 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1464175 ']' 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.051 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.051 [2024-11-20 09:58:30.133153] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
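The network bring-up traced above is the core of the per-test fixture: one port of the dual-port E810 (cvl_0_0) is moved into a private namespace to host the SPDK target, while its sibling port (cvl_0_1) stays in the root namespace as the initiator, so a real NVMe/TCP link runs end to end on a single machine (the two ports are presumably cabled back-to-back on this phy rig). A minimal sketch of that wiring, assembled from the exact commands in the trace (the interface names and 10.0.0.0/24 addresses are this testbed's; substitute your own):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port on the initiator side
  ping -c 1 10.0.0.2                                                 # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> initiator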
00:25:00.052 [2024-11-20 09:58:30.133231] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.052 [2024-11-20 09:58:30.234679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.052 [2024-11-20 09:58:30.289058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.052 [2024-11-20 09:58:30.289115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.052 [2024-11-20 09:58:30.289124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.052 [2024-11-20 09:58:30.289131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.052 [2024-11-20 09:58:30.289137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.052 [2024-11-20 09:58:30.289914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.052 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.052 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:25:00.052 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:00.052 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:00.052 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.312 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.312 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:00.312 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.312 09:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.312 [2024-11-20 09:58:31.004019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.312 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.312 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:00.312 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.312 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.312 null0 00:25:00.312 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.312 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:00.312 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.312 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.312 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.312 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:00.312 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:00.312 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.312 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.312 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b50fde51a0334f36a11fc49b04dfbbfa 00:25:00.312 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.312 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.312 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.312 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:00.313 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.313 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.313 [2024-11-20 09:58:31.064409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.313 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.313 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:00.313 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.313 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.573 nvme0n1 00:25:00.573 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.573 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:00.573 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.573 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.573 [ 00:25:00.573 { 00:25:00.573 "name": "nvme0n1", 00:25:00.573 "aliases": [ 00:25:00.573 "b50fde51-a033-4f36-a11f-c49b04dfbbfa" 00:25:00.573 ], 00:25:00.573 "product_name": "NVMe disk", 00:25:00.573 "block_size": 512, 00:25:00.573 "num_blocks": 2097152, 00:25:00.573 "uuid": "b50fde51-a033-4f36-a11f-c49b04dfbbfa", 00:25:00.573 "numa_id": 0, 00:25:00.573 "assigned_rate_limits": { 00:25:00.573 "rw_ios_per_sec": 0, 00:25:00.573 "rw_mbytes_per_sec": 0, 00:25:00.573 "r_mbytes_per_sec": 0, 00:25:00.573 "w_mbytes_per_sec": 0 00:25:00.573 }, 00:25:00.573 "claimed": false, 00:25:00.573 "zoned": false, 00:25:00.573 "supported_io_types": { 00:25:00.573 "read": true, 00:25:00.573 "write": true, 00:25:00.573 "unmap": false, 00:25:00.573 "flush": true, 00:25:00.573 "reset": true, 00:25:00.573 "nvme_admin": true, 00:25:00.573 "nvme_io": true, 00:25:00.573 "nvme_io_md": false, 00:25:00.573 "write_zeroes": true, 00:25:00.573 "zcopy": false, 00:25:00.573 "get_zone_info": false, 00:25:00.573 "zone_management": false, 00:25:00.573 "zone_append": false, 00:25:00.573 "compare": true, 00:25:00.573 "compare_and_write": true, 00:25:00.573 "abort": true, 00:25:00.573 "seek_hole": false, 00:25:00.573 "seek_data": false, 00:25:00.573 "copy": true, 00:25:00.573 "nvme_iov_md": false 00:25:00.573 }, 00:25:00.573 
"memory_domains": [ 00:25:00.573 { 00:25:00.573 "dma_device_id": "system", 00:25:00.573 "dma_device_type": 1 00:25:00.573 } 00:25:00.573 ], 00:25:00.573 "driver_specific": { 00:25:00.573 "nvme": [ 00:25:00.573 { 00:25:00.573 "trid": { 00:25:00.573 "trtype": "TCP", 00:25:00.573 "adrfam": "IPv4", 00:25:00.573 "traddr": "10.0.0.2", 00:25:00.573 "trsvcid": "4420", 00:25:00.573 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:00.573 }, 00:25:00.573 "ctrlr_data": { 00:25:00.573 "cntlid": 1, 00:25:00.573 "vendor_id": "0x8086", 00:25:00.573 "model_number": "SPDK bdev Controller", 00:25:00.573 "serial_number": "00000000000000000000", 00:25:00.573 "firmware_revision": "25.01", 00:25:00.573 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:00.573 "oacs": { 00:25:00.573 "security": 0, 00:25:00.573 "format": 0, 00:25:00.573 "firmware": 0, 00:25:00.573 "ns_manage": 0 00:25:00.573 }, 00:25:00.573 "multi_ctrlr": true, 00:25:00.573 "ana_reporting": false 00:25:00.573 }, 00:25:00.573 "vs": { 00:25:00.573 "nvme_version": "1.3" 00:25:00.573 }, 00:25:00.573 "ns_data": { 00:25:00.573 "id": 1, 00:25:00.573 "can_share": true 00:25:00.573 } 00:25:00.573 } 00:25:00.573 ], 00:25:00.573 "mp_policy": "active_passive" 00:25:00.573 } 00:25:00.573 } 00:25:00.573 ] 00:25:00.573 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.573 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:00.573 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.573 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.573 [2024-11-20 09:58:31.340872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:00.573 [2024-11-20 09:58:31.340959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b7ce0 (9): Bad file descriptor 00:25:00.573 [2024-11-20 09:58:31.473280] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:25:00.573 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.573 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:00.573 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.573 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.833 [ 00:25:00.833 { 00:25:00.834 "name": "nvme0n1", 00:25:00.834 "aliases": [ 00:25:00.834 "b50fde51-a033-4f36-a11f-c49b04dfbbfa" 00:25:00.834 ], 00:25:00.834 "product_name": "NVMe disk", 00:25:00.834 "block_size": 512, 00:25:00.834 "num_blocks": 2097152, 00:25:00.834 "uuid": "b50fde51-a033-4f36-a11f-c49b04dfbbfa", 00:25:00.834 "numa_id": 0, 00:25:00.834 "assigned_rate_limits": { 00:25:00.834 "rw_ios_per_sec": 0, 00:25:00.834 "rw_mbytes_per_sec": 0, 00:25:00.834 "r_mbytes_per_sec": 0, 00:25:00.834 "w_mbytes_per_sec": 0 00:25:00.834 }, 00:25:00.834 "claimed": false, 00:25:00.834 "zoned": false, 00:25:00.834 "supported_io_types": { 00:25:00.834 "read": true, 00:25:00.834 "write": true, 00:25:00.834 "unmap": false, 00:25:00.834 "flush": true, 00:25:00.834 "reset": true, 00:25:00.834 "nvme_admin": true, 00:25:00.834 "nvme_io": true, 00:25:00.834 "nvme_io_md": false, 00:25:00.834 "write_zeroes": true, 00:25:00.834 "zcopy": false, 00:25:00.834 "get_zone_info": false, 00:25:00.834 "zone_management": false, 00:25:00.834 "zone_append": false, 00:25:00.834 "compare": true, 00:25:00.834 "compare_and_write": true, 00:25:00.834 "abort": true, 00:25:00.834 "seek_hole": false, 00:25:00.834 "seek_data": false, 00:25:00.834 "copy": true, 00:25:00.834 "nvme_iov_md": false 00:25:00.834 }, 00:25:00.834 "memory_domains": [ 00:25:00.834 { 00:25:00.834 "dma_device_id": "system", 00:25:00.834 "dma_device_type": 1 00:25:00.834 } 00:25:00.834 ], 00:25:00.834 "driver_specific": { 00:25:00.834 "nvme": [ 00:25:00.834 { 00:25:00.834 "trid": { 00:25:00.834 "trtype": "TCP", 00:25:00.834 "adrfam": "IPv4", 00:25:00.834 "traddr": "10.0.0.2", 00:25:00.834 "trsvcid": "4420", 00:25:00.834 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:00.834 }, 00:25:00.834 "ctrlr_data": { 00:25:00.834 "cntlid": 2, 00:25:00.834 "vendor_id": "0x8086", 00:25:00.834 "model_number": "SPDK bdev Controller", 00:25:00.834 "serial_number": "00000000000000000000", 00:25:00.834 "firmware_revision": "25.01", 00:25:00.834 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:00.834 "oacs": { 00:25:00.834 "security": 0, 00:25:00.834 "format": 0, 00:25:00.834 "firmware": 0, 00:25:00.834 "ns_manage": 0 00:25:00.834 }, 00:25:00.834 "multi_ctrlr": true, 00:25:00.834 "ana_reporting": false 00:25:00.834 }, 00:25:00.834 "vs": { 00:25:00.834 "nvme_version": "1.3" 00:25:00.834 }, 00:25:00.834 "ns_data": { 00:25:00.834 "id": 1, 00:25:00.834 "can_share": true 00:25:00.834 } 00:25:00.834 } 00:25:00.834 ], 00:25:00.834 "mp_policy": "active_passive" 00:25:00.834 } 00:25:00.834 } 00:25:00.834 ] 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
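Set side by side, this second dump matches the first one field for field except that ctrlr_data.cntlid advanced from 1 to 2: the reset built a brand-new controller association while the bdev and its namespace UUID survived untouched, which is what the test is asserting. When eyeballing these dumps by hand, a one-liner like the following pulls out just that field (assuming jq is installed; the JSON path matches the dump above):

  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'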
00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.AKeQbvFSSJ 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.AKeQbvFSSJ 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.AKeQbvFSSJ 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.834 [2024-11-20 09:58:31.565577] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:00.834 [2024-11-20 09:58:31.565751] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.834 [2024-11-20 09:58:31.589653] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:00.834 nvme0n1 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.834 [ 00:25:00.834 { 00:25:00.834 "name": "nvme0n1", 00:25:00.834 "aliases": [ 00:25:00.834 "b50fde51-a033-4f36-a11f-c49b04dfbbfa" 00:25:00.834 ], 00:25:00.834 "product_name": "NVMe disk", 00:25:00.834 "block_size": 512, 00:25:00.834 "num_blocks": 2097152, 00:25:00.834 "uuid": "b50fde51-a033-4f36-a11f-c49b04dfbbfa", 00:25:00.834 "numa_id": 0, 00:25:00.834 "assigned_rate_limits": { 00:25:00.834 "rw_ios_per_sec": 0, 00:25:00.834 "rw_mbytes_per_sec": 0, 00:25:00.834 "r_mbytes_per_sec": 0, 00:25:00.834 "w_mbytes_per_sec": 0 00:25:00.834 }, 00:25:00.834 "claimed": false, 00:25:00.834 "zoned": false, 00:25:00.834 "supported_io_types": { 00:25:00.834 "read": true, 00:25:00.834 "write": true, 00:25:00.834 "unmap": false, 00:25:00.834 "flush": true, 00:25:00.834 "reset": true, 00:25:00.834 "nvme_admin": true, 00:25:00.834 "nvme_io": true, 00:25:00.834 "nvme_io_md": false, 00:25:00.834 "write_zeroes": true, 00:25:00.834 "zcopy": false, 00:25:00.834 "get_zone_info": false, 00:25:00.834 "zone_management": false, 00:25:00.834 "zone_append": false, 00:25:00.834 "compare": true, 00:25:00.834 "compare_and_write": true, 00:25:00.834 "abort": true, 00:25:00.834 "seek_hole": false, 00:25:00.834 "seek_data": false, 00:25:00.834 "copy": true, 00:25:00.834 "nvme_iov_md": false 00:25:00.834 }, 00:25:00.834 "memory_domains": [ 00:25:00.834 { 00:25:00.834 "dma_device_id": "system", 00:25:00.834 "dma_device_type": 1 00:25:00.834 } 00:25:00.834 ], 00:25:00.834 "driver_specific": { 00:25:00.834 "nvme": [ 00:25:00.834 { 00:25:00.834 "trid": { 00:25:00.834 "trtype": "TCP", 00:25:00.834 "adrfam": "IPv4", 00:25:00.834 "traddr": "10.0.0.2", 00:25:00.834 "trsvcid": "4421", 00:25:00.834 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:00.834 }, 00:25:00.834 "ctrlr_data": { 00:25:00.834 "cntlid": 3, 00:25:00.834 "vendor_id": "0x8086", 00:25:00.834 "model_number": "SPDK bdev Controller", 00:25:00.834 "serial_number": "00000000000000000000", 00:25:00.834 "firmware_revision": "25.01", 00:25:00.834 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:00.834 "oacs": { 00:25:00.834 "security": 0, 00:25:00.834 "format": 0, 00:25:00.834 "firmware": 0, 00:25:00.834 "ns_manage": 0 00:25:00.834 }, 00:25:00.834 "multi_ctrlr": true, 00:25:00.834 "ana_reporting": false 00:25:00.834 }, 00:25:00.834 "vs": { 00:25:00.834 "nvme_version": "1.3" 00:25:00.834 }, 00:25:00.834 "ns_data": { 00:25:00.834 "id": 1, 00:25:00.834 "can_share": true 00:25:00.834 } 00:25:00.834 } 00:25:00.834 ], 00:25:00.834 "mp_policy": "active_passive" 00:25:00.834 } 00:25:00.834 } 00:25:00.834 ] 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.AKeQbvFSSJ 00:25:00.834 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
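The TLS leg that just finished condenses to a short RPC recipe: write a PSK interchange key to a mode-0600 file, register it with the keyring, switch the subsystem to explicit host grants, open a --secure-channel listener on 4421, and attach from the host with the same key (note cntlid reached 3 and trsvcid is now 4421 in the dump above). Reassembled verbatim from the trace, with the caveat that the NVMeTLSkey-1:01:... string is this test's throwaway key and real deployments should generate their own:

  KEY_PATH=$(mktemp)
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"
  ./scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0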
00:25:00.835 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:25:00.835 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:00.835 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:25:00.835 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:00.835 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:25:00.835 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:00.835 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:00.835 rmmod nvme_tcp 00:25:00.835 rmmod nvme_fabrics 00:25:01.095 rmmod nvme_keyring 00:25:01.095 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:01.095 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:25:01.095 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:25:01.095 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1464175 ']' 00:25:01.095 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1464175 00:25:01.095 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1464175 ']' 00:25:01.095 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1464175 00:25:01.095 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:25:01.095 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.095 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1464175 00:25:01.095 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:01.095 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:01.095 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1464175' 00:25:01.095 killing process with pid 1464175 00:25:01.095 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1464175 00:25:01.095 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1464175 00:25:01.095 09:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:01.095 09:58:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:01.095 09:58:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:01.095 09:58:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:25:01.095 09:58:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:25:01.095 09:58:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:01.095 09:58:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:25:01.355 09:58:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:01.355 09:58:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:01.355 09:58:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:25:01.355 09:58:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.355 09:58:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.267 09:58:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:03.267 00:25:03.267 real 0m11.796s 00:25:03.267 user 0m4.274s 00:25:03.267 sys 0m6.109s 00:25:03.267 09:58:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.267 09:58:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:03.267 ************************************ 00:25:03.267 END TEST nvmf_async_init 00:25:03.267 ************************************ 00:25:03.267 09:58:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:03.267 09:58:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:03.267 09:58:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:03.267 09:58:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.267 ************************************ 00:25:03.267 START TEST dma 00:25:03.267 ************************************ 00:25:03.267 09:58:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:03.528 * Looking for test storage... 00:25:03.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.528 09:58:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:03.528 09:58:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:25:03.528 09:58:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:03.528 09:58:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:03.528 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.528 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.528 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:03.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.529 --rc genhtml_branch_coverage=1 00:25:03.529 --rc genhtml_function_coverage=1 00:25:03.529 --rc genhtml_legend=1 00:25:03.529 --rc geninfo_all_blocks=1 00:25:03.529 --rc geninfo_unexecuted_blocks=1 00:25:03.529 00:25:03.529 ' 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:03.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.529 --rc genhtml_branch_coverage=1 00:25:03.529 --rc genhtml_function_coverage=1 00:25:03.529 --rc genhtml_legend=1 00:25:03.529 --rc geninfo_all_blocks=1 00:25:03.529 --rc geninfo_unexecuted_blocks=1 00:25:03.529 00:25:03.529 ' 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:03.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.529 --rc genhtml_branch_coverage=1 00:25:03.529 --rc genhtml_function_coverage=1 00:25:03.529 --rc genhtml_legend=1 00:25:03.529 --rc geninfo_all_blocks=1 00:25:03.529 --rc geninfo_unexecuted_blocks=1 00:25:03.529 00:25:03.529 ' 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:03.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.529 --rc genhtml_branch_coverage=1 00:25:03.529 --rc genhtml_function_coverage=1 00:25:03.529 --rc genhtml_legend=1 00:25:03.529 --rc geninfo_all_blocks=1 00:25:03.529 --rc geninfo_unexecuted_blocks=1 00:25:03.529 00:25:03.529 ' 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.529 
09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.529 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.530 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:03.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:03.530 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.530 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.530 09:58:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.530 09:58:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:03.530 09:58:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:25:03.530 00:25:03.530 real 0m0.236s 00:25:03.530 user 0m0.140s 00:25:03.530 sys 0m0.113s 00:25:03.530 09:58:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.530 09:58:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:03.530 ************************************ 00:25:03.530 END TEST dma 00:25:03.530 ************************************ 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.791 ************************************ 00:25:03.791 START TEST nvmf_identify 00:25:03.791 
************************************ 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:03.791 * Looking for test storage... 00:25:03.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:03.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.791 --rc genhtml_branch_coverage=1 00:25:03.791 --rc genhtml_function_coverage=1 00:25:03.791 --rc genhtml_legend=1 00:25:03.791 --rc geninfo_all_blocks=1 00:25:03.791 --rc geninfo_unexecuted_blocks=1 00:25:03.791 00:25:03.791 ' 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:03.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.791 --rc genhtml_branch_coverage=1 00:25:03.791 --rc genhtml_function_coverage=1 00:25:03.791 --rc genhtml_legend=1 00:25:03.791 --rc geninfo_all_blocks=1 00:25:03.791 --rc geninfo_unexecuted_blocks=1 00:25:03.791 00:25:03.791 ' 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:03.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.791 --rc genhtml_branch_coverage=1 00:25:03.791 --rc genhtml_function_coverage=1 00:25:03.791 --rc genhtml_legend=1 00:25:03.791 --rc geninfo_all_blocks=1 00:25:03.791 --rc geninfo_unexecuted_blocks=1 00:25:03.791 00:25:03.791 ' 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:03.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.791 --rc genhtml_branch_coverage=1 00:25:03.791 --rc genhtml_function_coverage=1 00:25:03.791 --rc genhtml_legend=1 00:25:03.791 --rc geninfo_all_blocks=1 00:25:03.791 --rc geninfo_unexecuted_blocks=1 00:25:03.791 00:25:03.791 ' 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.791 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:04.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:25:04.052 09:58:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:12.190 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:12.190 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:25:12.190 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:12.190 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:12.190 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:12.190 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:12.190 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:12.190 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:25:12.190 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:12.190 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:25:12.190 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:25:12.190 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:25:12.190 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:25:12.190 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:25:12.190 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:25:12.190 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:12.190 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:12.190 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:12.191 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:12.191 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:12.191 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:12.191 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:12.191 09:58:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:12.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:12.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:25:12.191 00:25:12.191 --- 10.0.0.2 ping statistics --- 00:25:12.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.191 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:12.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:12.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:25:12.191 00:25:12.191 --- 10.0.0.1 ping statistics --- 00:25:12.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.191 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1468908 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1468908 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1468908 ']' 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.191 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:12.192 09:58:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:12.192 [2024-11-20 09:58:42.340963] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
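Annotation: nvmf_tcp_init has now split the two physical ports across namespaces: the target port (cvl_0_0) moves into cvl_0_0_ns_spdk with 10.0.0.2/24, while the initiator port (cvl_0_1) stays in the root namespace with 10.0.0.1/24; the ports are cabled back-to-back, so the cross-namespace pings above validate the path. Condensed from the commands in the trace (interface names are specific to this rig):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns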
00:25:12.192 [2024-11-20 09:58:42.341030] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:12.192 [2024-11-20 09:58:42.441942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:12.192 [2024-11-20 09:58:42.496196] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:12.192 [2024-11-20 09:58:42.496247] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:12.192 [2024-11-20 09:58:42.496256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:12.192 [2024-11-20 09:58:42.496263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:12.192 [2024-11-20 09:58:42.496269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:12.192 [2024-11-20 09:58:42.498703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.192 [2024-11-20 09:58:42.498865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:12.192 [2024-11-20 09:58:42.499024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.192 [2024-11-20 09:58:42.499025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:12.453 [2024-11-20 09:58:43.178996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:12.453 Malloc0 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:12.453 [2024-11-20 09:58:43.297251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:12.453 [ 00:25:12.453 { 00:25:12.453 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:12.453 "subtype": "Discovery", 00:25:12.453 "listen_addresses": [ 00:25:12.453 { 00:25:12.453 "trtype": "TCP", 00:25:12.453 "adrfam": "IPv4", 00:25:12.453 "traddr": "10.0.0.2", 00:25:12.453 "trsvcid": "4420" 00:25:12.453 } 00:25:12.453 ], 00:25:12.453 "allow_any_host": true, 00:25:12.453 "hosts": [] 00:25:12.453 }, 00:25:12.453 { 00:25:12.453 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.453 "subtype": "NVMe", 00:25:12.453 "listen_addresses": [ 00:25:12.453 { 00:25:12.453 "trtype": "TCP", 00:25:12.453 "adrfam": "IPv4", 00:25:12.453 "traddr": "10.0.0.2", 00:25:12.453 "trsvcid": "4420" 00:25:12.453 } 00:25:12.453 ], 00:25:12.453 "allow_any_host": true, 00:25:12.453 "hosts": [], 00:25:12.453 "serial_number": "SPDK00000000000001", 00:25:12.453 "model_number": "SPDK bdev Controller", 00:25:12.453 "max_namespaces": 32, 00:25:12.453 "min_cntlid": 1, 00:25:12.453 "max_cntlid": 65519, 00:25:12.453 "namespaces": [ 00:25:12.453 { 00:25:12.453 "nsid": 1, 00:25:12.453 "bdev_name": "Malloc0", 00:25:12.453 "name": "Malloc0", 00:25:12.453 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:12.453 "eui64": "ABCDEF0123456789", 00:25:12.453 "uuid": "c613b24c-6848-4210-9b97-c3e2aee56a35" 00:25:12.453 } 00:25:12.453 ] 00:25:12.453 } 00:25:12.453 ] 00:25:12.453 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.454 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:12.454 [2024-11-20 09:58:43.359487] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:25:12.454 [2024-11-20 09:58:43.359527] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469108 ] 00:25:12.717 [2024-11-20 09:58:43.415937] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:25:12.717 [2024-11-20 09:58:43.416005] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:12.717 [2024-11-20 09:58:43.416011] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:12.717 [2024-11-20 09:58:43.416031] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:12.717 [2024-11-20 09:58:43.416044] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:12.717 [2024-11-20 09:58:43.416874] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:25:12.717 [2024-11-20 09:58:43.416927] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1717690 0 00:25:12.717 [2024-11-20 09:58:43.427180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:12.717 [2024-11-20 09:58:43.427198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:12.717 [2024-11-20 09:58:43.427203] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:12.717 [2024-11-20 09:58:43.427207] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:12.717 [2024-11-20 09:58:43.427249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.717 [2024-11-20 09:58:43.427255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.717 [2024-11-20 09:58:43.427260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1717690) 00:25:12.717 [2024-11-20 09:58:43.427276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:12.717 [2024-11-20 09:58:43.427301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779100, cid 0, qid 0 00:25:12.717 [2024-11-20 09:58:43.435178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.717 [2024-11-20 09:58:43.435190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.717 [2024-11-20 09:58:43.435194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.717 [2024-11-20 09:58:43.435199] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779100) on tqpair=0x1717690 00:25:12.717 [2024-11-20 09:58:43.435210] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:12.717 [2024-11-20 09:58:43.435218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:25:12.717 [2024-11-20 09:58:43.435223] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:25:12.717 [2024-11-20 09:58:43.435240] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.717 [2024-11-20 09:58:43.435245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.717 [2024-11-20 09:58:43.435248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1717690) 00:25:12.717 [2024-11-20 09:58:43.435258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.717 [2024-11-20 09:58:43.435273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779100, cid 0, qid 0 00:25:12.717 [2024-11-20 09:58:43.435530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.717 [2024-11-20 09:58:43.435539] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.717 [2024-11-20 09:58:43.435542] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.717 [2024-11-20 09:58:43.435546] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779100) on tqpair=0x1717690 00:25:12.717 [2024-11-20 09:58:43.435552] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:25:12.717 [2024-11-20 09:58:43.435559] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:25:12.717 [2024-11-20 09:58:43.435567] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.717 [2024-11-20 09:58:43.435570] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.717 [2024-11-20 09:58:43.435574] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1717690) 00:25:12.717 [2024-11-20 09:58:43.435581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.717 [2024-11-20 09:58:43.435597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779100, cid 0, qid 0 00:25:12.717 [2024-11-20 09:58:43.435762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.717 [2024-11-20 09:58:43.435768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.717 [2024-11-20 09:58:43.435772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.717 [2024-11-20 09:58:43.435775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779100) on tqpair=0x1717690 00:25:12.717 [2024-11-20 09:58:43.435781] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:25:12.717 [2024-11-20 09:58:43.435790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:12.717 [2024-11-20 09:58:43.435797] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.717 [2024-11-20 09:58:43.435800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.717 [2024-11-20 09:58:43.435804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1717690) 00:25:12.717 [2024-11-20 09:58:43.435811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.717 [2024-11-20 09:58:43.435821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779100, cid 0, qid 0 
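Annotation: before this trace, host/identify.sh provisioned the target over the RPC socket and then launched spdk_nvme_identify against the discovery NQN; the DEBUG lines that follow are the admin-queue bring-up (FABRIC CONNECT, then property reads of VS and CAP, then the CC.EN/CSTS.RDY enable handshake). Those rpc_cmd calls, written out as direct scripts/rpc.py invocations (rpc_cmd is the harness wrapper around scripts/rpc.py; the cd path is taken from the trace):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420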
00:25:12.717 [2024-11-20 09:58:43.436002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.717 [2024-11-20 09:58:43.436009] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.717 [2024-11-20 09:58:43.436012] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.717 [2024-11-20 09:58:43.436016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779100) on tqpair=0x1717690 00:25:12.717 [2024-11-20 09:58:43.436022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:12.717 [2024-11-20 09:58:43.436032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.717 [2024-11-20 09:58:43.436036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.717 [2024-11-20 09:58:43.436039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1717690) 00:25:12.717 [2024-11-20 09:58:43.436046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.717 [2024-11-20 09:58:43.436056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779100, cid 0, qid 0 00:25:12.717 [2024-11-20 09:58:43.436262] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.717 [2024-11-20 09:58:43.436269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.717 [2024-11-20 09:58:43.436272] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.717 [2024-11-20 09:58:43.436276] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779100) on tqpair=0x1717690 00:25:12.717 [2024-11-20 09:58:43.436281] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:12.717 [2024-11-20 09:58:43.436286] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:12.717 [2024-11-20 09:58:43.436294] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:12.717 [2024-11-20 09:58:43.436406] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:25:12.717 [2024-11-20 09:58:43.436411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:12.717 [2024-11-20 09:58:43.436421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.717 [2024-11-20 09:58:43.436425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.717 [2024-11-20 09:58:43.436431] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1717690) 00:25:12.717 [2024-11-20 09:58:43.436438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.717 [2024-11-20 09:58:43.436448] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779100, cid 0, qid 0 00:25:12.717 [2024-11-20 09:58:43.436665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.717 [2024-11-20 09:58:43.436672] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.717 [2024-11-20 09:58:43.436675] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.436679] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779100) on tqpair=0x1717690 00:25:12.718 [2024-11-20 09:58:43.436684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:12.718 [2024-11-20 09:58:43.436693] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.436697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.436701] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1717690) 00:25:12.718 [2024-11-20 09:58:43.436708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.718 [2024-11-20 09:58:43.436718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779100, cid 0, qid 0 00:25:12.718 [2024-11-20 09:58:43.436888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.718 [2024-11-20 09:58:43.436895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.718 [2024-11-20 09:58:43.436898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.436902] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779100) on tqpair=0x1717690 00:25:12.718 [2024-11-20 09:58:43.436907] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:12.718 [2024-11-20 09:58:43.436911] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:12.718 [2024-11-20 09:58:43.436919] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:25:12.718 [2024-11-20 09:58:43.436936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:12.718 [2024-11-20 09:58:43.436947] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.436950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1717690) 00:25:12.718 [2024-11-20 09:58:43.436957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.718 [2024-11-20 09:58:43.436968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779100, cid 0, qid 0 00:25:12.718 [2024-11-20 09:58:43.437204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.718 [2024-11-20 09:58:43.437210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.718 [2024-11-20 09:58:43.437214] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.437219] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1717690): datao=0, datal=4096, cccid=0 00:25:12.718 [2024-11-20 09:58:43.437224] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1779100) on tqpair(0x1717690): expected_datao=0, payload_size=4096 00:25:12.718 [2024-11-20 09:58:43.437228] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.437249] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.437257] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.437406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.718 [2024-11-20 09:58:43.437412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.718 [2024-11-20 09:58:43.437415] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.437419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779100) on tqpair=0x1717690 00:25:12.718 [2024-11-20 09:58:43.437429] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:25:12.718 [2024-11-20 09:58:43.437434] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:25:12.718 [2024-11-20 09:58:43.437439] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:25:12.718 [2024-11-20 09:58:43.437448] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:25:12.718 [2024-11-20 09:58:43.437453] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:25:12.718 [2024-11-20 09:58:43.437458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:25:12.718 [2024-11-20 09:58:43.437468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:12.718 [2024-11-20 09:58:43.437476] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.437480] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.437484] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1717690) 00:25:12.718 [2024-11-20 09:58:43.437494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:12.718 [2024-11-20 09:58:43.437505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779100, cid 0, qid 0 00:25:12.718 [2024-11-20 09:58:43.437680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.718 [2024-11-20 09:58:43.437687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.718 [2024-11-20 09:58:43.437690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.437694] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779100) on tqpair=0x1717690 00:25:12.718 [2024-11-20 09:58:43.437703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.437706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.437710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1717690) 00:25:12.718 
[2024-11-20 09:58:43.437716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.718 [2024-11-20 09:58:43.437723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.437727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.437730] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1717690) 00:25:12.718 [2024-11-20 09:58:43.437736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.718 [2024-11-20 09:58:43.437742] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.437746] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.437749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1717690) 00:25:12.718 [2024-11-20 09:58:43.437755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.718 [2024-11-20 09:58:43.437761] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.437767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.437771] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1717690) 00:25:12.718 [2024-11-20 09:58:43.437777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.718 [2024-11-20 09:58:43.437782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:12.718 [2024-11-20 09:58:43.437790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:12.718 [2024-11-20 09:58:43.437797] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.718 [2024-11-20 09:58:43.437801] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1717690) 00:25:12.718 [2024-11-20 09:58:43.437807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.718 [2024-11-20 09:58:43.437821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779100, cid 0, qid 0 00:25:12.718 [2024-11-20 09:58:43.437827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779280, cid 1, qid 0 00:25:12.718 [2024-11-20 09:58:43.437832] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779400, cid 2, qid 0 00:25:12.718 [2024-11-20 09:58:43.437836] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779580, cid 3, qid 0 00:25:12.718 [2024-11-20 09:58:43.437841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779700, cid 4, qid 0 00:25:12.718 [2024-11-20 09:58:43.438083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.718 [2024-11-20 09:58:43.438090] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.718 [2024-11-20 09:58:43.438093] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:25:12.718 [2024-11-20 09:58:43.438097] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779700) on tqpair=0x1717690 00:25:12.719 [2024-11-20 09:58:43.438105] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:25:12.719 [2024-11-20 09:58:43.438110] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:25:12.719 [2024-11-20 09:58:43.438121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.438125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1717690) 00:25:12.719 [2024-11-20 09:58:43.438132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.719 [2024-11-20 09:58:43.438143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779700, cid 4, qid 0 00:25:12.719 [2024-11-20 09:58:43.438335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.719 [2024-11-20 09:58:43.438343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.719 [2024-11-20 09:58:43.438346] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.438350] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1717690): datao=0, datal=4096, cccid=4 00:25:12.719 [2024-11-20 09:58:43.438355] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1779700) on tqpair(0x1717690): expected_datao=0, payload_size=4096 00:25:12.719 [2024-11-20 09:58:43.438359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.438373] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.438377] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.483176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.719 [2024-11-20 09:58:43.483188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.719 [2024-11-20 09:58:43.483196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.483201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779700) on tqpair=0x1717690 00:25:12.719 [2024-11-20 09:58:43.483217] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:25:12.719 [2024-11-20 09:58:43.483248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.483253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1717690) 00:25:12.719 [2024-11-20 09:58:43.483261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.719 [2024-11-20 09:58:43.483270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.483273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.483277] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1717690) 00:25:12.719 [2024-11-20 09:58:43.483283] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.719 [2024-11-20 09:58:43.483301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779700, cid 4, qid 0 00:25:12.719 [2024-11-20 09:58:43.483306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779880, cid 5, qid 0 00:25:12.719 [2024-11-20 09:58:43.483600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.719 [2024-11-20 09:58:43.483607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.719 [2024-11-20 09:58:43.483611] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.483615] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1717690): datao=0, datal=1024, cccid=4 00:25:12.719 [2024-11-20 09:58:43.483619] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1779700) on tqpair(0x1717690): expected_datao=0, payload_size=1024 00:25:12.719 [2024-11-20 09:58:43.483624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.483630] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.483634] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.483640] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.719 [2024-11-20 09:58:43.483646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.719 [2024-11-20 09:58:43.483650] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.483653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779880) on tqpair=0x1717690 00:25:12.719 [2024-11-20 09:58:43.528176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.719 [2024-11-20 09:58:43.528189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.719 [2024-11-20 09:58:43.528192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.528197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779700) on tqpair=0x1717690 00:25:12.719 [2024-11-20 09:58:43.528211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.528215] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1717690) 00:25:12.719 [2024-11-20 09:58:43.528223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.719 [2024-11-20 09:58:43.528242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779700, cid 4, qid 0 00:25:12.719 [2024-11-20 09:58:43.528522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.719 [2024-11-20 09:58:43.528528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.719 [2024-11-20 09:58:43.528532] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.528536] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1717690): datao=0, datal=3072, cccid=4 00:25:12.719 [2024-11-20 09:58:43.528545] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1779700) on tqpair(0x1717690): expected_datao=0, payload_size=3072 00:25:12.719 [2024-11-20 09:58:43.528550] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.528557] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.528561] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.528680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.719 [2024-11-20 09:58:43.528686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.719 [2024-11-20 09:58:43.528689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.528693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779700) on tqpair=0x1717690 00:25:12.719 [2024-11-20 09:58:43.528702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.528706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1717690) 00:25:12.719 [2024-11-20 09:58:43.528713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.719 [2024-11-20 09:58:43.528727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779700, cid 4, qid 0 00:25:12.719 [2024-11-20 09:58:43.528963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.719 [2024-11-20 09:58:43.528969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.719 [2024-11-20 09:58:43.528973] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.528976] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1717690): datao=0, datal=8, cccid=4 00:25:12.719 [2024-11-20 09:58:43.528981] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1779700) on tqpair(0x1717690): expected_datao=0, payload_size=8 00:25:12.719 [2024-11-20 09:58:43.528985] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.528992] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.528996] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.569386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.719 [2024-11-20 09:58:43.569399] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.719 [2024-11-20 09:58:43.569403] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.719 [2024-11-20 09:58:43.569407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779700) on tqpair=0x1717690 00:25:12.719 ===================================================== 00:25:12.719 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:12.719 ===================================================== 00:25:12.719 Controller Capabilities/Features 00:25:12.719 ================================ 00:25:12.719 Vendor ID: 0000 00:25:12.719 Subsystem Vendor ID: 0000 00:25:12.719 Serial Number: .................... 00:25:12.719 Model Number: ........................................ 
00:25:12.719 Firmware Version: 25.01 00:25:12.719 Recommended Arb Burst: 0 00:25:12.719 IEEE OUI Identifier: 00 00 00 00:25:12.719 Multi-path I/O 00:25:12.719 May have multiple subsystem ports: No 00:25:12.719 May have multiple controllers: No 00:25:12.720 Associated with SR-IOV VF: No 00:25:12.720 Max Data Transfer Size: 131072 00:25:12.720 Max Number of Namespaces: 0 00:25:12.720 Max Number of I/O Queues: 1024 00:25:12.720 NVMe Specification Version (VS): 1.3 00:25:12.720 NVMe Specification Version (Identify): 1.3 00:25:12.720 Maximum Queue Entries: 128 00:25:12.720 Contiguous Queues Required: Yes 00:25:12.720 Arbitration Mechanisms Supported 00:25:12.720 Weighted Round Robin: Not Supported 00:25:12.720 Vendor Specific: Not Supported 00:25:12.720 Reset Timeout: 15000 ms 00:25:12.720 Doorbell Stride: 4 bytes 00:25:12.720 NVM Subsystem Reset: Not Supported 00:25:12.720 Command Sets Supported 00:25:12.720 NVM Command Set: Supported 00:25:12.720 Boot Partition: Not Supported 00:25:12.720 Memory Page Size Minimum: 4096 bytes 00:25:12.720 Memory Page Size Maximum: 4096 bytes 00:25:12.720 Persistent Memory Region: Not Supported 00:25:12.720 Optional Asynchronous Events Supported 00:25:12.720 Namespace Attribute Notices: Not Supported 00:25:12.720 Firmware Activation Notices: Not Supported 00:25:12.720 ANA Change Notices: Not Supported 00:25:12.720 PLE Aggregate Log Change Notices: Not Supported 00:25:12.720 LBA Status Info Alert Notices: Not Supported 00:25:12.720 EGE Aggregate Log Change Notices: Not Supported 00:25:12.720 Normal NVM Subsystem Shutdown event: Not Supported 00:25:12.720 Zone Descriptor Change Notices: Not Supported 00:25:12.720 Discovery Log Change Notices: Supported 00:25:12.720 Controller Attributes 00:25:12.720 128-bit Host Identifier: Not Supported 00:25:12.720 Non-Operational Permissive Mode: Not Supported 00:25:12.720 NVM Sets: Not Supported 00:25:12.720 Read Recovery Levels: Not Supported 00:25:12.720 Endurance Groups: Not Supported 00:25:12.720 Predictable Latency Mode: Not Supported 00:25:12.720 Traffic Based Keep ALive: Not Supported 00:25:12.720 Namespace Granularity: Not Supported 00:25:12.720 SQ Associations: Not Supported 00:25:12.720 UUID List: Not Supported 00:25:12.720 Multi-Domain Subsystem: Not Supported 00:25:12.720 Fixed Capacity Management: Not Supported 00:25:12.720 Variable Capacity Management: Not Supported 00:25:12.720 Delete Endurance Group: Not Supported 00:25:12.720 Delete NVM Set: Not Supported 00:25:12.720 Extended LBA Formats Supported: Not Supported 00:25:12.720 Flexible Data Placement Supported: Not Supported 00:25:12.720 00:25:12.720 Controller Memory Buffer Support 00:25:12.720 ================================ 00:25:12.720 Supported: No 00:25:12.720 00:25:12.720 Persistent Memory Region Support 00:25:12.720 ================================ 00:25:12.720 Supported: No 00:25:12.720 00:25:12.720 Admin Command Set Attributes 00:25:12.720 ============================ 00:25:12.720 Security Send/Receive: Not Supported 00:25:12.720 Format NVM: Not Supported 00:25:12.720 Firmware Activate/Download: Not Supported 00:25:12.720 Namespace Management: Not Supported 00:25:12.720 Device Self-Test: Not Supported 00:25:12.720 Directives: Not Supported 00:25:12.720 NVMe-MI: Not Supported 00:25:12.720 Virtualization Management: Not Supported 00:25:12.720 Doorbell Buffer Config: Not Supported 00:25:12.720 Get LBA Status Capability: Not Supported 00:25:12.720 Command & Feature Lockdown Capability: Not Supported 00:25:12.720 Abort Command Limit: 1 00:25:12.720 Async 
Event Request Limit: 4 00:25:12.720 Number of Firmware Slots: N/A 00:25:12.720 Firmware Slot 1 Read-Only: N/A 00:25:12.720 Firmware Activation Without Reset: N/A 00:25:12.720 Multiple Update Detection Support: N/A 00:25:12.720 Firmware Update Granularity: No Information Provided 00:25:12.720 Per-Namespace SMART Log: No 00:25:12.720 Asymmetric Namespace Access Log Page: Not Supported 00:25:12.720 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:12.720 Command Effects Log Page: Not Supported 00:25:12.720 Get Log Page Extended Data: Supported 00:25:12.720 Telemetry Log Pages: Not Supported 00:25:12.720 Persistent Event Log Pages: Not Supported 00:25:12.720 Supported Log Pages Log Page: May Support 00:25:12.720 Commands Supported & Effects Log Page: Not Supported 00:25:12.720 Feature Identifiers & Effects Log Page:May Support 00:25:12.720 NVMe-MI Commands & Effects Log Page: May Support 00:25:12.720 Data Area 4 for Telemetry Log: Not Supported 00:25:12.720 Error Log Page Entries Supported: 128 00:25:12.720 Keep Alive: Not Supported 00:25:12.720 00:25:12.720 NVM Command Set Attributes 00:25:12.720 ========================== 00:25:12.720 Submission Queue Entry Size 00:25:12.720 Max: 1 00:25:12.720 Min: 1 00:25:12.720 Completion Queue Entry Size 00:25:12.720 Max: 1 00:25:12.720 Min: 1 00:25:12.720 Number of Namespaces: 0 00:25:12.720 Compare Command: Not Supported 00:25:12.720 Write Uncorrectable Command: Not Supported 00:25:12.720 Dataset Management Command: Not Supported 00:25:12.720 Write Zeroes Command: Not Supported 00:25:12.720 Set Features Save Field: Not Supported 00:25:12.720 Reservations: Not Supported 00:25:12.720 Timestamp: Not Supported 00:25:12.720 Copy: Not Supported 00:25:12.720 Volatile Write Cache: Not Present 00:25:12.720 Atomic Write Unit (Normal): 1 00:25:12.720 Atomic Write Unit (PFail): 1 00:25:12.720 Atomic Compare & Write Unit: 1 00:25:12.720 Fused Compare & Write: Supported 00:25:12.720 Scatter-Gather List 00:25:12.720 SGL Command Set: Supported 00:25:12.720 SGL Keyed: Supported 00:25:12.720 SGL Bit Bucket Descriptor: Not Supported 00:25:12.720 SGL Metadata Pointer: Not Supported 00:25:12.720 Oversized SGL: Not Supported 00:25:12.720 SGL Metadata Address: Not Supported 00:25:12.720 SGL Offset: Supported 00:25:12.720 Transport SGL Data Block: Not Supported 00:25:12.720 Replay Protected Memory Block: Not Supported 00:25:12.720 00:25:12.720 Firmware Slot Information 00:25:12.720 ========================= 00:25:12.720 Active slot: 0 00:25:12.720 00:25:12.720 00:25:12.720 Error Log 00:25:12.720 ========= 00:25:12.720 00:25:12.720 Active Namespaces 00:25:12.720 ================= 00:25:12.720 Discovery Log Page 00:25:12.720 ================== 00:25:12.720 Generation Counter: 2 00:25:12.720 Number of Records: 2 00:25:12.720 Record Format: 0 00:25:12.720 00:25:12.720 Discovery Log Entry 0 00:25:12.720 ---------------------- 00:25:12.720 Transport Type: 3 (TCP) 00:25:12.720 Address Family: 1 (IPv4) 00:25:12.720 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:12.720 Entry Flags: 00:25:12.720 Duplicate Returned Information: 1 00:25:12.720 Explicit Persistent Connection Support for Discovery: 1 00:25:12.721 Transport Requirements: 00:25:12.721 Secure Channel: Not Required 00:25:12.721 Port ID: 0 (0x0000) 00:25:12.721 Controller ID: 65535 (0xffff) 00:25:12.721 Admin Max SQ Size: 128 00:25:12.721 Transport Service Identifier: 4420 00:25:12.721 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:12.721 Transport Address: 10.0.0.2 00:25:12.721 
Discovery Log Entry 1 00:25:12.721 ---------------------- 00:25:12.721 Transport Type: 3 (TCP) 00:25:12.721 Address Family: 1 (IPv4) 00:25:12.721 Subsystem Type: 2 (NVM Subsystem) 00:25:12.721 Entry Flags: 00:25:12.721 Duplicate Returned Information: 0 00:25:12.721 Explicit Persistent Connection Support for Discovery: 0 00:25:12.721 Transport Requirements: 00:25:12.721 Secure Channel: Not Required 00:25:12.721 Port ID: 0 (0x0000) 00:25:12.721 Controller ID: 65535 (0xffff) 00:25:12.721 Admin Max SQ Size: 128 00:25:12.721 Transport Service Identifier: 4420 00:25:12.721 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:12.721 Transport Address: 10.0.0.2 [2024-11-20 09:58:43.569512] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:25:12.721 [2024-11-20 09:58:43.569523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779100) on tqpair=0x1717690 00:25:12.721 [2024-11-20 09:58:43.569530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.721 [2024-11-20 09:58:43.569535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779280) on tqpair=0x1717690 00:25:12.721 [2024-11-20 09:58:43.569540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.721 [2024-11-20 09:58:43.569545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779400) on tqpair=0x1717690 00:25:12.721 [2024-11-20 09:58:43.569550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.721 [2024-11-20 09:58:43.569555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779580) on tqpair=0x1717690 00:25:12.721 [2024-11-20 09:58:43.569560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.721 [2024-11-20 09:58:43.569571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.569576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.569582] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1717690) 00:25:12.721 [2024-11-20 09:58:43.569590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.721 [2024-11-20 09:58:43.569605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779580, cid 3, qid 0 00:25:12.721 [2024-11-20 09:58:43.569724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.721 [2024-11-20 09:58:43.569731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.721 [2024-11-20 09:58:43.569735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.569739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779580) on tqpair=0x1717690 00:25:12.721 [2024-11-20 09:58:43.569746] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.569750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.569753] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1717690) 00:25:12.721 [2024-11-20 
09:58:43.569760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.721 [2024-11-20 09:58:43.569774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779580, cid 3, qid 0 00:25:12.721 [2024-11-20 09:58:43.570003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.721 [2024-11-20 09:58:43.570010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.721 [2024-11-20 09:58:43.570013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.570017] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779580) on tqpair=0x1717690 00:25:12.721 [2024-11-20 09:58:43.570022] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:25:12.721 [2024-11-20 09:58:43.570027] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:25:12.721 [2024-11-20 09:58:43.570036] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.570040] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.570044] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1717690) 00:25:12.721 [2024-11-20 09:58:43.570051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.721 [2024-11-20 09:58:43.570061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779580, cid 3, qid 0 00:25:12.721 [2024-11-20 09:58:43.570277] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.721 [2024-11-20 09:58:43.570284] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.721 [2024-11-20 09:58:43.570287] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.570291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779580) on tqpair=0x1717690 00:25:12.721 [2024-11-20 09:58:43.570302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.570306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.570309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1717690) 00:25:12.721 [2024-11-20 09:58:43.570316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.721 [2024-11-20 09:58:43.570327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779580, cid 3, qid 0 00:25:12.721 [2024-11-20 09:58:43.570530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.721 [2024-11-20 09:58:43.570538] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.721 [2024-11-20 09:58:43.570541] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.570545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779580) on tqpair=0x1717690 00:25:12.721 [2024-11-20 09:58:43.570558] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.570562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.570566] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1717690) 00:25:12.721 [2024-11-20 09:58:43.570573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.721 [2024-11-20 09:58:43.570583] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779580, cid 3, qid 0 00:25:12.721 [2024-11-20 09:58:43.570830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.721 [2024-11-20 09:58:43.570837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.721 [2024-11-20 09:58:43.570840] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.570844] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779580) on tqpair=0x1717690 00:25:12.721 [2024-11-20 09:58:43.570855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.570859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.570862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1717690) 00:25:12.721 [2024-11-20 09:58:43.570869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.721 [2024-11-20 09:58:43.570880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779580, cid 3, qid 0 00:25:12.721 [2024-11-20 09:58:43.571066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.721 [2024-11-20 09:58:43.571072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.721 [2024-11-20 09:58:43.571076] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.571080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779580) on tqpair=0x1717690 00:25:12.721 [2024-11-20 09:58:43.571089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.571093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.721 [2024-11-20 09:58:43.571097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1717690) 00:25:12.721 [2024-11-20 09:58:43.571104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.721 [2024-11-20 09:58:43.571113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779580, cid 3, qid 0 00:25:12.721 [2024-11-20 09:58:43.571334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.721 [2024-11-20 09:58:43.571341] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.721 [2024-11-20 09:58:43.571344] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.571348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779580) on tqpair=0x1717690 00:25:12.722 [2024-11-20 09:58:43.571358] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.571362] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.571365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1717690) 00:25:12.722 [2024-11-20 09:58:43.571372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.722 [2024-11-20 09:58:43.571382] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779580, cid 3, qid 0 00:25:12.722 [2024-11-20 09:58:43.571586] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.722 [2024-11-20 09:58:43.571592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.722 [2024-11-20 09:58:43.571595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.571599] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779580) on tqpair=0x1717690 00:25:12.722 [2024-11-20 09:58:43.571609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.571616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.571619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1717690) 00:25:12.722 [2024-11-20 09:58:43.571626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.722 [2024-11-20 09:58:43.571636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779580, cid 3, qid 0 00:25:12.722 [2024-11-20 09:58:43.571840] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.722 [2024-11-20 09:58:43.571846] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.722 [2024-11-20 09:58:43.571850] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.571854] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779580) on tqpair=0x1717690 00:25:12.722 [2024-11-20 09:58:43.571863] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.571867] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.571871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1717690) 00:25:12.722 [2024-11-20 09:58:43.571878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.722 [2024-11-20 09:58:43.571888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779580, cid 3, qid 0 00:25:12.722 [2024-11-20 09:58:43.572074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.722 [2024-11-20 09:58:43.572080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.722 [2024-11-20 09:58:43.572083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.572087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779580) on tqpair=0x1717690 00:25:12.722 [2024-11-20 09:58:43.572097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.572101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.572105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1717690) 00:25:12.722 [2024-11-20 09:58:43.572111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.722 [2024-11-20 09:58:43.572121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779580, cid 3, qid 0 00:25:12.722 
[2024-11-20 09:58:43.572293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.722 [2024-11-20 09:58:43.572299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.722 [2024-11-20 09:58:43.572303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.572307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779580) on tqpair=0x1717690 00:25:12.722 [2024-11-20 09:58:43.572316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.572320] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.572324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1717690) 00:25:12.722 [2024-11-20 09:58:43.572331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.722 [2024-11-20 09:58:43.572341] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779580, cid 3, qid 0 00:25:12.722 [2024-11-20 09:58:43.572543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.722 [2024-11-20 09:58:43.572551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.722 [2024-11-20 09:58:43.572554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.572558] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779580) on tqpair=0x1717690 00:25:12.722 [2024-11-20 09:58:43.572568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.572572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.572578] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1717690) 00:25:12.722 [2024-11-20 09:58:43.572585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.722 [2024-11-20 09:58:43.572595] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779580, cid 3, qid 0 00:25:12.722 [2024-11-20 09:58:43.572847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.722 [2024-11-20 09:58:43.572853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.722 [2024-11-20 09:58:43.572856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.572860] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779580) on tqpair=0x1717690 00:25:12.722 [2024-11-20 09:58:43.572870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.572875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.722 [2024-11-20 09:58:43.572878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1717690) 00:25:12.722 [2024-11-20 09:58:43.572885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.722 [2024-11-20 09:58:43.572895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779580, cid 3, qid 0 00:25:12.723 [2024-11-20 09:58:43.573101] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.723 [2024-11-20 09:58:43.573107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:25:12.723 [2024-11-20 09:58:43.573110] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.723 [2024-11-20 09:58:43.573114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779580) on tqpair=0x1717690
00:25:12.723 [2024-11-20 09:58:43.573124] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.723 [2024-11-20 09:58:43.573128] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.723 [2024-11-20 09:58:43.573132] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1717690)
00:25:12.723 [2024-11-20 09:58:43.573138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.723 [2024-11-20 09:58:43.573148] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1779580, cid 3, qid 0
00:25:12.723 [2024-11-20 09:58:43.577172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.723 [2024-11-20 09:58:43.577181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.723 [2024-11-20 09:58:43.577185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.723 [2024-11-20 09:58:43.577189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1779580) on tqpair=0x1717690
00:25:12.723 [2024-11-20 09:58:43.577197] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds
00:25:12.723
00:25:12.723 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:25:12.723 [2024-11-20 09:58:43.625936] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:25:12.723 [2024-11-20 09:58:43.626011] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469226 ] 00:25:12.988 [2024-11-20 09:58:43.679775] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:25:12.988 [2024-11-20 09:58:43.679839] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:12.988 [2024-11-20 09:58:43.679845] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:12.988 [2024-11-20 09:58:43.679865] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:12.988 [2024-11-20 09:58:43.679876] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:12.988 [2024-11-20 09:58:43.683492] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:25:12.988 [2024-11-20 09:58:43.683530] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf49690 0 00:25:12.988 [2024-11-20 09:58:43.691173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:12.988 [2024-11-20 09:58:43.691189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:12.988 [2024-11-20 09:58:43.691193] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:12.988 [2024-11-20 09:58:43.691197] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:12.988 [2024-11-20 09:58:43.691234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.988 [2024-11-20 09:58:43.691240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.988 [2024-11-20 09:58:43.691244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf49690) 00:25:12.988 [2024-11-20 09:58:43.691257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:12.988 [2024-11-20 09:58:43.691281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab100, cid 0, qid 0 00:25:12.988 [2024-11-20 09:58:43.699172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.988 [2024-11-20 09:58:43.699183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.988 [2024-11-20 09:58:43.699187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.988 [2024-11-20 09:58:43.699192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab100) on tqpair=0xf49690 00:25:12.988 [2024-11-20 09:58:43.699205] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:12.988 [2024-11-20 09:58:43.699213] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:25:12.988 [2024-11-20 09:58:43.699219] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:25:12.988 [2024-11-20 09:58:43.699233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.988 [2024-11-20 09:58:43.699237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.988 [2024-11-20 09:58:43.699241] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf49690) 00:25:12.988 [2024-11-20 09:58:43.699250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.988 [2024-11-20 09:58:43.699266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab100, cid 0, qid 0 00:25:12.988 [2024-11-20 09:58:43.699446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.988 [2024-11-20 09:58:43.699453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.988 [2024-11-20 09:58:43.699456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.988 [2024-11-20 09:58:43.699460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab100) on tqpair=0xf49690 00:25:12.988 [2024-11-20 09:58:43.699465] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:25:12.988 [2024-11-20 09:58:43.699473] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:25:12.988 [2024-11-20 09:58:43.699480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.988 [2024-11-20 09:58:43.699484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.988 [2024-11-20 09:58:43.699488] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf49690) 00:25:12.988 [2024-11-20 09:58:43.699499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.988 [2024-11-20 09:58:43.699510] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab100, cid 0, qid 0 00:25:12.988 [2024-11-20 09:58:43.699738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.988 [2024-11-20 09:58:43.699744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.988 [2024-11-20 09:58:43.699748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.988 [2024-11-20 09:58:43.699752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab100) on tqpair=0xf49690 00:25:12.988 [2024-11-20 09:58:43.699757] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:25:12.988 [2024-11-20 09:58:43.699766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:12.988 [2024-11-20 09:58:43.699772] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.988 [2024-11-20 09:58:43.699776] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.988 [2024-11-20 09:58:43.699780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf49690) 00:25:12.988 [2024-11-20 09:58:43.699787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.988 [2024-11-20 09:58:43.699798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab100, cid 0, qid 0 00:25:12.988 [2024-11-20 09:58:43.700002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.989 [2024-11-20 09:58:43.700008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.989 [2024-11-20 09:58:43.700012] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.700016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab100) on tqpair=0xf49690 00:25:12.989 [2024-11-20 09:58:43.700021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:12.989 [2024-11-20 09:58:43.700030] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.700034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.700038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf49690) 00:25:12.989 [2024-11-20 09:58:43.700045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.989 [2024-11-20 09:58:43.700056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab100, cid 0, qid 0 00:25:12.989 [2024-11-20 09:58:43.700243] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.989 [2024-11-20 09:58:43.700250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.989 [2024-11-20 09:58:43.700253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.700257] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab100) on tqpair=0xf49690 00:25:12.989 [2024-11-20 09:58:43.700262] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:12.989 [2024-11-20 09:58:43.700267] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:12.989 [2024-11-20 09:58:43.700275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:12.989 [2024-11-20 09:58:43.700383] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:25:12.989 [2024-11-20 09:58:43.700388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:12.989 [2024-11-20 09:58:43.700397] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.700403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.700406] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf49690) 00:25:12.989 [2024-11-20 09:58:43.700413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.989 [2024-11-20 09:58:43.700424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab100, cid 0, qid 0 00:25:12.989 [2024-11-20 09:58:43.700612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.989 [2024-11-20 09:58:43.700618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.989 [2024-11-20 09:58:43.700622] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.700626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab100) on tqpair=0xf49690 00:25:12.989 [2024-11-20 
09:58:43.700630] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:12.989 [2024-11-20 09:58:43.700641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.700645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.700648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf49690) 00:25:12.989 [2024-11-20 09:58:43.700655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.989 [2024-11-20 09:58:43.700666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab100, cid 0, qid 0 00:25:12.989 [2024-11-20 09:58:43.700852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.989 [2024-11-20 09:58:43.700858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.989 [2024-11-20 09:58:43.700862] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.700866] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab100) on tqpair=0xf49690 00:25:12.989 [2024-11-20 09:58:43.700870] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:12.989 [2024-11-20 09:58:43.700875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:12.989 [2024-11-20 09:58:43.700883] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:25:12.989 [2024-11-20 09:58:43.700891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:12.989 [2024-11-20 09:58:43.700900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.700904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf49690) 00:25:12.989 [2024-11-20 09:58:43.700911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.989 [2024-11-20 09:58:43.700922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab100, cid 0, qid 0 00:25:12.989 [2024-11-20 09:58:43.701174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.989 [2024-11-20 09:58:43.701182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.989 [2024-11-20 09:58:43.701185] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.701189] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf49690): datao=0, datal=4096, cccid=0 00:25:12.989 [2024-11-20 09:58:43.701194] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfab100) on tqpair(0xf49690): expected_datao=0, payload_size=4096 00:25:12.989 [2024-11-20 09:58:43.701198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.701213] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.701218] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:25:12.989 [2024-11-20 09:58:43.742356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.989 [2024-11-20 09:58:43.742367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.989 [2024-11-20 09:58:43.742370] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.742375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab100) on tqpair=0xf49690 00:25:12.989 [2024-11-20 09:58:43.742384] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:25:12.989 [2024-11-20 09:58:43.742389] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:25:12.989 [2024-11-20 09:58:43.742393] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:25:12.989 [2024-11-20 09:58:43.742401] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:25:12.989 [2024-11-20 09:58:43.742406] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:25:12.989 [2024-11-20 09:58:43.742412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:25:12.989 [2024-11-20 09:58:43.742423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:12.989 [2024-11-20 09:58:43.742430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.742434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.742438] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf49690) 00:25:12.989 [2024-11-20 09:58:43.742446] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:12.989 [2024-11-20 09:58:43.742459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab100, cid 0, qid 0 00:25:12.989 [2024-11-20 09:58:43.742617] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.989 [2024-11-20 09:58:43.742623] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.989 [2024-11-20 09:58:43.742627] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.742631] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab100) on tqpair=0xf49690 00:25:12.989 [2024-11-20 09:58:43.742638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.742642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.989 [2024-11-20 09:58:43.742645] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf49690) 00:25:12.989 [2024-11-20 09:58:43.742652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.990 [2024-11-20 09:58:43.742658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.742662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.742665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=1 on tqpair(0xf49690) 00:25:12.990 [2024-11-20 09:58:43.742671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.990 [2024-11-20 09:58:43.742677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.742681] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.742684] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf49690) 00:25:12.990 [2024-11-20 09:58:43.742690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.990 [2024-11-20 09:58:43.742696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.742700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.742707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf49690) 00:25:12.990 [2024-11-20 09:58:43.742713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.990 [2024-11-20 09:58:43.742718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:12.990 [2024-11-20 09:58:43.742725] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:12.990 [2024-11-20 09:58:43.742732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.742735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf49690) 00:25:12.990 [2024-11-20 09:58:43.742742] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.990 [2024-11-20 09:58:43.742754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab100, cid 0, qid 0 00:25:12.990 [2024-11-20 09:58:43.742759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab280, cid 1, qid 0 00:25:12.990 [2024-11-20 09:58:43.742764] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab400, cid 2, qid 0 00:25:12.990 [2024-11-20 09:58:43.742769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab580, cid 3, qid 0 00:25:12.990 [2024-11-20 09:58:43.742774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab700, cid 4, qid 0 00:25:12.990 [2024-11-20 09:58:43.743036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.990 [2024-11-20 09:58:43.743042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.990 [2024-11-20 09:58:43.743046] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.743050] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab700) on tqpair=0xf49690 00:25:12.990 [2024-11-20 09:58:43.743057] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:25:12.990 [2024-11-20 09:58:43.743062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 
00:25:12.990 [2024-11-20 09:58:43.743071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:25:12.990 [2024-11-20 09:58:43.743078] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:12.990 [2024-11-20 09:58:43.743084] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.743088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.743092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf49690) 00:25:12.990 [2024-11-20 09:58:43.743098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:12.990 [2024-11-20 09:58:43.743109] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab700, cid 4, qid 0 00:25:12.990 [2024-11-20 09:58:43.743294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.990 [2024-11-20 09:58:43.743300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.990 [2024-11-20 09:58:43.743304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.743308] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab700) on tqpair=0xf49690 00:25:12.990 [2024-11-20 09:58:43.743376] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:25:12.990 [2024-11-20 09:58:43.743386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:12.990 [2024-11-20 09:58:43.743396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.743400] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf49690) 00:25:12.990 [2024-11-20 09:58:43.743407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.990 [2024-11-20 09:58:43.743417] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab700, cid 4, qid 0 00:25:12.990 [2024-11-20 09:58:43.743653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.990 [2024-11-20 09:58:43.743659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.990 [2024-11-20 09:58:43.743662] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.743666] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf49690): datao=0, datal=4096, cccid=4 00:25:12.990 [2024-11-20 09:58:43.743671] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfab700) on tqpair(0xf49690): expected_datao=0, payload_size=4096 00:25:12.990 [2024-11-20 09:58:43.743675] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.743683] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.743686] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.743840] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.990 [2024-11-20 09:58:43.743846] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.990 [2024-11-20 09:58:43.743850] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.743854] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab700) on tqpair=0xf49690 00:25:12.990 [2024-11-20 09:58:43.743863] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:25:12.990 [2024-11-20 09:58:43.743873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:25:12.990 [2024-11-20 09:58:43.743882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:25:12.990 [2024-11-20 09:58:43.743888] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.743892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf49690) 00:25:12.990 [2024-11-20 09:58:43.743899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.990 [2024-11-20 09:58:43.743910] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab700, cid 4, qid 0 00:25:12.990 [2024-11-20 09:58:43.744174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.990 [2024-11-20 09:58:43.744181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.990 [2024-11-20 09:58:43.744185] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.744188] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf49690): datao=0, datal=4096, cccid=4 00:25:12.990 [2024-11-20 09:58:43.744193] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfab700) on tqpair(0xf49690): expected_datao=0, payload_size=4096 00:25:12.990 [2024-11-20 09:58:43.744197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.744204] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.744207] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.744367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.990 [2024-11-20 09:58:43.744373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.990 [2024-11-20 09:58:43.744376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.744380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab700) on tqpair=0xf49690 00:25:12.990 [2024-11-20 09:58:43.744393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:12.990 [2024-11-20 09:58:43.744405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:12.990 [2024-11-20 09:58:43.744412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.744416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf49690) 00:25:12.990 [2024-11-20 09:58:43.744422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.990 [2024-11-20 09:58:43.744433] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab700, cid 4, qid 0 00:25:12.990 [2024-11-20 09:58:43.744681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.990 [2024-11-20 09:58:43.744687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.990 [2024-11-20 09:58:43.744691] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.744694] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf49690): datao=0, datal=4096, cccid=4 00:25:12.990 [2024-11-20 09:58:43.744699] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfab700) on tqpair(0xf49690): expected_datao=0, payload_size=4096 00:25:12.990 [2024-11-20 09:58:43.744703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.744710] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.744713] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:12.990 [2024-11-20 09:58:43.744890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.990 [2024-11-20 09:58:43.744897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.991 [2024-11-20 09:58:43.744900] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.744904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab700) on tqpair=0xf49690 00:25:12.991 [2024-11-20 09:58:43.744911] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:12.991 [2024-11-20 09:58:43.744919] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:25:12.991 [2024-11-20 09:58:43.744928] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:25:12.991 [2024-11-20 09:58:43.744934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:12.991 [2024-11-20 09:58:43.744939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:12.991 [2024-11-20 09:58:43.744945] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:25:12.991 [2024-11-20 09:58:43.744950] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:25:12.991 [2024-11-20 09:58:43.744955] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:25:12.991 [2024-11-20 09:58:43.744960] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:25:12.991 [2024-11-20 09:58:43.744975] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.744979] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf49690) 00:25:12.991 
[2024-11-20 09:58:43.744985] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.991 [2024-11-20 09:58:43.744992] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.744998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.745002] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf49690) 00:25:12.991 [2024-11-20 09:58:43.745008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.991 [2024-11-20 09:58:43.745021] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab700, cid 4, qid 0 00:25:12.991 [2024-11-20 09:58:43.745027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab880, cid 5, qid 0 00:25:12.991 [2024-11-20 09:58:43.745280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.991 [2024-11-20 09:58:43.745286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.991 [2024-11-20 09:58:43.745289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.745293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab700) on tqpair=0xf49690 00:25:12.991 [2024-11-20 09:58:43.745300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.991 [2024-11-20 09:58:43.745306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.991 [2024-11-20 09:58:43.745309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.745313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab880) on tqpair=0xf49690 00:25:12.991 [2024-11-20 09:58:43.745323] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.745327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf49690) 00:25:12.991 [2024-11-20 09:58:43.745333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.991 [2024-11-20 09:58:43.745344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab880, cid 5, qid 0 00:25:12.991 [2024-11-20 09:58:43.745530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.991 [2024-11-20 09:58:43.745536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.991 [2024-11-20 09:58:43.745540] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.745544] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab880) on tqpair=0xf49690 00:25:12.991 [2024-11-20 09:58:43.745553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.745557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf49690) 00:25:12.991 [2024-11-20 09:58:43.745563] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.991 [2024-11-20 09:58:43.745573] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab880, cid 5, qid 0 00:25:12.991 [2024-11-20 09:58:43.745758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:25:12.991 [2024-11-20 09:58:43.745764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.991 [2024-11-20 09:58:43.745768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.745772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab880) on tqpair=0xf49690 00:25:12.991 [2024-11-20 09:58:43.745781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.745785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf49690) 00:25:12.991 [2024-11-20 09:58:43.745791] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.991 [2024-11-20 09:58:43.745801] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab880, cid 5, qid 0 00:25:12.991 [2024-11-20 09:58:43.746018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.991 [2024-11-20 09:58:43.746025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.991 [2024-11-20 09:58:43.746028] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.746032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab880) on tqpair=0xf49690 00:25:12.991 [2024-11-20 09:58:43.746050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.746055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf49690) 00:25:12.991 [2024-11-20 09:58:43.746061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.991 [2024-11-20 09:58:43.746069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.746072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf49690) 00:25:12.991 [2024-11-20 09:58:43.746079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.991 [2024-11-20 09:58:43.746086] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.746090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xf49690) 00:25:12.991 [2024-11-20 09:58:43.746096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.991 [2024-11-20 09:58:43.746104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.746107] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf49690) 00:25:12.991 [2024-11-20 09:58:43.746113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.991 [2024-11-20 09:58:43.746125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab880, cid 5, qid 0 00:25:12.991 [2024-11-20 09:58:43.746130] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab700, cid 4, qid 0 00:25:12.991 [2024-11-20 09:58:43.746135] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfaba00, cid 6, qid 0 00:25:12.991 [2024-11-20 09:58:43.746139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfabb80, cid 7, qid 0 00:25:12.991 [2024-11-20 09:58:43.750176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.991 [2024-11-20 09:58:43.750184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.991 [2024-11-20 09:58:43.750187] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.750191] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf49690): datao=0, datal=8192, cccid=5 00:25:12.991 [2024-11-20 09:58:43.750196] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfab880) on tqpair(0xf49690): expected_datao=0, payload_size=8192 00:25:12.991 [2024-11-20 09:58:43.750200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.750208] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.750211] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.750217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.991 [2024-11-20 09:58:43.750223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.991 [2024-11-20 09:58:43.750226] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.750230] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf49690): datao=0, datal=512, cccid=4 00:25:12.991 [2024-11-20 09:58:43.750234] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfab700) on tqpair(0xf49690): expected_datao=0, payload_size=512 00:25:12.991 [2024-11-20 09:58:43.750239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.750245] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.750249] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:12.991 [2024-11-20 09:58:43.750254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.992 [2024-11-20 09:58:43.750260] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.992 [2024-11-20 09:58:43.750269] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.992 [2024-11-20 09:58:43.750272] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf49690): datao=0, datal=512, cccid=6 00:25:12.992 [2024-11-20 09:58:43.750277] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfaba00) on tqpair(0xf49690): expected_datao=0, payload_size=512 00:25:12.992 [2024-11-20 09:58:43.750281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.992 [2024-11-20 09:58:43.750287] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.992 [2024-11-20 09:58:43.750291] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:12.992 [2024-11-20 09:58:43.750297] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.992 [2024-11-20 09:58:43.750302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.992 [2024-11-20 09:58:43.750306] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.992 [2024-11-20 09:58:43.750309] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0xf49690): datao=0, datal=4096, cccid=7
00:25:12.992 [2024-11-20 09:58:43.750313] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfabb80) on tqpair(0xf49690): expected_datao=0, payload_size=4096
00:25:12.992 [2024-11-20 09:58:43.750318] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.992 [2024-11-20 09:58:43.750324] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:12.992 [2024-11-20 09:58:43.750328] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:12.992 [2024-11-20 09:58:43.750333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.992 [2024-11-20 09:58:43.750339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.992 [2024-11-20 09:58:43.750342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.992 [2024-11-20 09:58:43.750346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab880) on tqpair=0xf49690
00:25:12.992 [2024-11-20 09:58:43.750359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.992 [2024-11-20 09:58:43.750365] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.992 [2024-11-20 09:58:43.750369] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.992 [2024-11-20 09:58:43.750373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab700) on tqpair=0xf49690
00:25:12.992 [2024-11-20 09:58:43.750383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.992 [2024-11-20 09:58:43.750389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.992 [2024-11-20 09:58:43.750393] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.992 [2024-11-20 09:58:43.750396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfaba00) on tqpair=0xf49690
00:25:12.992 [2024-11-20 09:58:43.750404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.992 [2024-11-20 09:58:43.750409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.992 [2024-11-20 09:58:43.750413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.992 [2024-11-20 09:58:43.750417] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfabb80) on tqpair=0xf49690
00:25:12.992 =====================================================
00:25:12.992 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:12.992 =====================================================
00:25:12.992 Controller Capabilities/Features
00:25:12.992 ================================
00:25:12.992 Vendor ID: 8086
00:25:12.992 Subsystem Vendor ID: 8086
00:25:12.992 Serial Number: SPDK00000000000001
00:25:12.992 Model Number: SPDK bdev Controller
00:25:12.992 Firmware Version: 25.01
00:25:12.992 Recommended Arb Burst: 6
00:25:12.992 IEEE OUI Identifier: e4 d2 5c
00:25:12.992 Multi-path I/O
00:25:12.992 May have multiple subsystem ports: Yes
00:25:12.992 May have multiple controllers: Yes
00:25:12.992 Associated with SR-IOV VF: No
00:25:12.992 Max Data Transfer Size: 131072
00:25:12.992 Max Number of Namespaces: 32
00:25:12.992 Max Number of I/O Queues: 127
00:25:12.992 NVMe Specification Version (VS): 1.3
00:25:12.992 NVMe Specification Version (Identify): 1.3
00:25:12.992 Maximum Queue Entries: 128
00:25:12.992 Contiguous Queues Required: Yes
00:25:12.992 Arbitration Mechanisms Supported
00:25:12.992 Weighted Round Robin: Not Supported
00:25:12.992 Vendor Specific: Not Supported
00:25:12.992 Reset Timeout: 15000 ms
00:25:12.992 Doorbell Stride: 4 bytes
00:25:12.992 NVM Subsystem Reset: Not Supported
00:25:12.992 Command Sets Supported
00:25:12.992 NVM Command Set: Supported
00:25:12.992 Boot Partition: Not Supported
00:25:12.992 Memory Page Size Minimum: 4096 bytes
00:25:12.992 Memory Page Size Maximum: 4096 bytes
00:25:12.992 Persistent Memory Region: Not Supported
00:25:12.992 Optional Asynchronous Events Supported
00:25:12.992 Namespace Attribute Notices: Supported
00:25:12.992 Firmware Activation Notices: Not Supported
00:25:12.992 ANA Change Notices: Not Supported
00:25:12.992 PLE Aggregate Log Change Notices: Not Supported
00:25:12.992 LBA Status Info Alert Notices: Not Supported
00:25:12.992 EGE Aggregate Log Change Notices: Not Supported
00:25:12.992 Normal NVM Subsystem Shutdown event: Not Supported
00:25:12.992 Zone Descriptor Change Notices: Not Supported
00:25:12.992 Discovery Log Change Notices: Not Supported
00:25:12.992 Controller Attributes
00:25:12.992 128-bit Host Identifier: Supported
00:25:12.992 Non-Operational Permissive Mode: Not Supported
00:25:12.992 NVM Sets: Not Supported
00:25:12.992 Read Recovery Levels: Not Supported
00:25:12.992 Endurance Groups: Not Supported
00:25:12.992 Predictable Latency Mode: Not Supported
00:25:12.992 Traffic Based Keep ALive: Not Supported
00:25:12.992 Namespace Granularity: Not Supported
00:25:12.992 SQ Associations: Not Supported
00:25:12.992 UUID List: Not Supported
00:25:12.992 Multi-Domain Subsystem: Not Supported
00:25:12.992 Fixed Capacity Management: Not Supported
00:25:12.992 Variable Capacity Management: Not Supported
00:25:12.992 Delete Endurance Group: Not Supported
00:25:12.992 Delete NVM Set: Not Supported
00:25:12.992 Extended LBA Formats Supported: Not Supported
00:25:12.992 Flexible Data Placement Supported: Not Supported
00:25:12.992
00:25:12.992 Controller Memory Buffer Support
00:25:12.992 ================================
00:25:12.992 Supported: No
00:25:12.992
00:25:12.992 Persistent Memory Region Support
00:25:12.992 ================================
00:25:12.992 Supported: No
00:25:12.992
00:25:12.992 Admin Command Set Attributes
00:25:12.992 ============================
00:25:12.992 Security Send/Receive: Not Supported
00:25:12.992 Format NVM: Not Supported
00:25:12.992 Firmware Activate/Download: Not Supported
00:25:12.992 Namespace Management: Not Supported
00:25:12.992 Device Self-Test: Not Supported
00:25:12.992 Directives: Not Supported
00:25:12.992 NVMe-MI: Not Supported
00:25:12.992 Virtualization Management: Not Supported
00:25:12.992 Doorbell Buffer Config: Not Supported
00:25:12.992 Get LBA Status Capability: Not Supported
00:25:12.992 Command & Feature Lockdown Capability: Not Supported
00:25:12.992 Abort Command Limit: 4
00:25:12.992 Async Event Request Limit: 4
00:25:12.992 Number of Firmware Slots: N/A
00:25:12.992 Firmware Slot 1 Read-Only: N/A
00:25:12.992 Firmware Activation Without Reset: N/A
00:25:12.992 Multiple Update Detection Support: N/A
00:25:12.992 Firmware Update Granularity: No Information Provided
00:25:12.992 Per-Namespace SMART Log: No
00:25:12.992 Asymmetric Namespace Access Log Page: Not Supported
00:25:12.992 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:25:12.992 Command Effects Log Page: Supported
00:25:12.992 Get Log Page Extended Data: Supported
00:25:12.992 Telemetry Log Pages: Not Supported
00:25:12.992 Persistent Event Log Pages: Not Supported
00:25:12.992 Supported Log Pages Log Page: May Support
00:25:12.992 Commands Supported & Effects Log Page: Not Supported
00:25:12.992 Feature Identifiers & Effects Log Page:May Support
00:25:12.992 NVMe-MI Commands & Effects Log Page: May Support
00:25:12.992 Data Area 4 for Telemetry Log: Not Supported
00:25:12.992 Error Log Page Entries Supported: 128
00:25:12.992 Keep Alive: Supported
00:25:12.992 Keep Alive Granularity: 10000 ms
00:25:12.992
00:25:12.992 NVM Command Set Attributes
00:25:12.992 ==========================
00:25:12.992 Submission Queue Entry Size
00:25:12.992 Max: 64
00:25:12.993 Min: 64
00:25:12.993 Completion Queue Entry Size
00:25:12.993 Max: 16
00:25:12.993 Min: 16
00:25:12.993 Number of Namespaces: 32
00:25:12.993 Compare Command: Supported
00:25:12.993 Write Uncorrectable Command: Not Supported
00:25:12.993 Dataset Management Command: Supported
00:25:12.993 Write Zeroes Command: Supported
00:25:12.993 Set Features Save Field: Not Supported
00:25:12.993 Reservations: Supported
00:25:12.993 Timestamp: Not Supported
00:25:12.993 Copy: Supported
00:25:12.993 Volatile Write Cache: Present
00:25:12.993 Atomic Write Unit (Normal): 1
00:25:12.993 Atomic Write Unit (PFail): 1
00:25:12.993 Atomic Compare & Write Unit: 1
00:25:12.993 Fused Compare & Write: Supported
00:25:12.993 Scatter-Gather List
00:25:12.993 SGL Command Set: Supported
00:25:12.993 SGL Keyed: Supported
00:25:12.993 SGL Bit Bucket Descriptor: Not Supported
00:25:12.993 SGL Metadata Pointer: Not Supported
00:25:12.993 Oversized SGL: Not Supported
00:25:12.993 SGL Metadata Address: Not Supported
00:25:12.993 SGL Offset: Supported
00:25:12.993 Transport SGL Data Block: Not Supported
00:25:12.993 Replay Protected Memory Block: Not Supported
00:25:12.993
00:25:12.993 Firmware Slot Information
00:25:12.993 =========================
00:25:12.993 Active slot: 1
00:25:12.993 Slot 1 Firmware Revision: 25.01
00:25:12.993
00:25:12.993
00:25:12.993 Commands Supported and Effects
00:25:12.993 ==============================
00:25:12.993 Admin Commands
00:25:12.993 --------------
00:25:12.993 Get Log Page (02h): Supported
00:25:12.993 Identify (06h): Supported
00:25:12.993 Abort (08h): Supported
00:25:12.993 Set Features (09h): Supported
00:25:12.993 Get Features (0Ah): Supported
00:25:12.993 Asynchronous Event Request (0Ch): Supported
00:25:12.993 Keep Alive (18h): Supported
00:25:12.993 I/O Commands
00:25:12.993 ------------
00:25:12.993 Flush (00h): Supported LBA-Change
00:25:12.993 Write (01h): Supported LBA-Change
00:25:12.993 Read (02h): Supported
00:25:12.993 Compare (05h): Supported
00:25:12.993 Write Zeroes (08h): Supported LBA-Change
00:25:12.993 Dataset Management (09h): Supported LBA-Change
00:25:12.993 Copy (19h): Supported LBA-Change
00:25:12.993
00:25:12.993 Error Log
00:25:12.993 =========
00:25:12.993
00:25:12.993 Arbitration
00:25:12.993 ===========
00:25:12.993 Arbitration Burst: 1
00:25:12.993
00:25:12.993 Power Management
00:25:12.993 ================
00:25:12.993 Number of Power States: 1
00:25:12.993 Current Power State: Power State #0
00:25:12.993 Power State #0:
00:25:12.993 Max Power: 0.00 W
00:25:12.993 Non-Operational State: Operational
00:25:12.993 Entry Latency: Not Reported
00:25:12.993 Exit Latency: Not Reported
00:25:12.993 Relative Read Throughput: 0
00:25:12.993 Relative Read Latency: 0
00:25:12.993 Relative Write Throughput: 0
00:25:12.993 Relative Write Latency: 0
00:25:12.993 Idle Power: Not Reported
00:25:12.993 Active Power: Not Reported
00:25:12.993 Non-Operational Permissive Mode: Not Supported
00:25:12.993
00:25:12.993 Health Information
00:25:12.993 ==================
00:25:12.993 Critical Warnings:
00:25:12.993 Available Spare Space: OK
00:25:12.993 Temperature: OK
00:25:12.993 Device Reliability: OK
00:25:12.993 Read Only: No
00:25:12.993 Volatile Memory Backup: OK
00:25:12.993 Current Temperature: 0 Kelvin (-273 Celsius)
00:25:12.993 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:25:12.993 Available Spare: 0%
00:25:12.993 Available Spare Threshold: 0%
00:25:12.993 Life Percentage Used:[2024-11-20 09:58:43.750517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.993 [2024-11-20 09:58:43.750523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf49690)
00:25:12.993 [2024-11-20 09:58:43.750530] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.993 [2024-11-20 09:58:43.750543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfabb80, cid 7, qid 0
00:25:12.993 [2024-11-20 09:58:43.750764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.993 [2024-11-20 09:58:43.750770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.993 [2024-11-20 09:58:43.750774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.993 [2024-11-20 09:58:43.750777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfabb80) on tqpair=0xf49690
00:25:12.993 [2024-11-20 09:58:43.750812] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:25:12.993 [2024-11-20 09:58:43.750823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab100) on tqpair=0xf49690
00:25:12.993 [2024-11-20 09:58:43.750829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.993 [2024-11-20 09:58:43.750834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab280) on tqpair=0xf49690
00:25:12.993 [2024-11-20 09:58:43.750839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.993 [2024-11-20 09:58:43.750844] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab400) on tqpair=0xf49690
00:25:12.993 [2024-11-20 09:58:43.750849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.993 [2024-11-20 09:58:43.750854] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab580) on tqpair=0xf49690
00:25:12.993 [2024-11-20 09:58:43.750858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.993 [2024-11-20 09:58:43.750867] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.993 [2024-11-20 09:58:43.750871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.993 [2024-11-20 09:58:43.750875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf49690)
00:25:12.993 [2024-11-20 09:58:43.750881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.993 [2024-11-20 09:58:43.750894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab580, cid 3, qid 0
00:25:12.994 [2024-11-20
09:58:43.751111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.994 [2024-11-20 09:58:43.751117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.994 [2024-11-20 09:58:43.751121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.751125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab580) on tqpair=0xf49690 00:25:12.994 [2024-11-20 09:58:43.751132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.751135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.751139] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf49690) 00:25:12.994 [2024-11-20 09:58:43.751146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.994 [2024-11-20 09:58:43.751169] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab580, cid 3, qid 0 00:25:12.994 [2024-11-20 09:58:43.751389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.994 [2024-11-20 09:58:43.751395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.994 [2024-11-20 09:58:43.751398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.751402] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab580) on tqpair=0xf49690 00:25:12.994 [2024-11-20 09:58:43.751407] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:25:12.994 [2024-11-20 09:58:43.751412] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:25:12.994 [2024-11-20 09:58:43.751421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.751425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.751429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf49690) 00:25:12.994 [2024-11-20 09:58:43.751435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.994 [2024-11-20 09:58:43.751446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab580, cid 3, qid 0 00:25:12.994 [2024-11-20 09:58:43.751650] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.994 [2024-11-20 09:58:43.751656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.994 [2024-11-20 09:58:43.751659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.751663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab580) on tqpair=0xf49690 00:25:12.994 [2024-11-20 09:58:43.751674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.751677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.751681] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf49690) 00:25:12.994 [2024-11-20 09:58:43.751688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.994 [2024-11-20 09:58:43.751698] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab580, cid 3, qid 0 00:25:12.994 [2024-11-20 09:58:43.751888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.994 [2024-11-20 09:58:43.751894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.994 [2024-11-20 09:58:43.751898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.751901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab580) on tqpair=0xf49690 00:25:12.994 [2024-11-20 09:58:43.751911] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.751915] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.751919] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf49690) 00:25:12.994 [2024-11-20 09:58:43.751925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.994 [2024-11-20 09:58:43.751935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab580, cid 3, qid 0 00:25:12.994 [2024-11-20 09:58:43.752147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.994 [2024-11-20 09:58:43.752154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.994 [2024-11-20 09:58:43.752157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.752167] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab580) on tqpair=0xf49690 00:25:12.994 [2024-11-20 09:58:43.752177] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.752181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.752184] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf49690) 00:25:12.994 [2024-11-20 09:58:43.752191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.994 [2024-11-20 09:58:43.752201] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab580, cid 3, qid 0 00:25:12.994 [2024-11-20 09:58:43.752375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.994 [2024-11-20 09:58:43.752381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.994 [2024-11-20 09:58:43.752384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.752388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab580) on tqpair=0xf49690 00:25:12.994 [2024-11-20 09:58:43.752398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.752402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.752406] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf49690) 00:25:12.994 [2024-11-20 09:58:43.752413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.994 [2024-11-20 09:58:43.752423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab580, cid 3, qid 0 00:25:12.994 [2024-11-20 09:58:43.752616] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.994 [2024-11-20 
09:58:43.752625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.994 [2024-11-20 09:58:43.752628] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.752632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab580) on tqpair=0xf49690 00:25:12.994 [2024-11-20 09:58:43.752642] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.752646] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.752649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf49690) 00:25:12.994 [2024-11-20 09:58:43.752656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.994 [2024-11-20 09:58:43.752667] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab580, cid 3, qid 0 00:25:12.994 [2024-11-20 09:58:43.752832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.994 [2024-11-20 09:58:43.752839] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.994 [2024-11-20 09:58:43.752842] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.752846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab580) on tqpair=0xf49690 00:25:12.994 [2024-11-20 09:58:43.752856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.752860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.752863] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf49690) 00:25:12.994 [2024-11-20 09:58:43.752870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.994 [2024-11-20 09:58:43.752881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab580, cid 3, qid 0 00:25:12.994 [2024-11-20 09:58:43.753067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.994 [2024-11-20 09:58:43.753073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.994 [2024-11-20 09:58:43.753076] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.753080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab580) on tqpair=0xf49690 00:25:12.994 [2024-11-20 09:58:43.753090] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.753094] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.994 [2024-11-20 09:58:43.753097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf49690) 00:25:12.994 [2024-11-20 09:58:43.753104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.994 [2024-11-20 09:58:43.753115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab580, cid 3, qid 0 00:25:12.994 [2024-11-20 09:58:43.753332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.995 [2024-11-20 09:58:43.753339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.995 [2024-11-20 09:58:43.753342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.995 [2024-11-20 
09:58:43.753346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab580) on tqpair=0xf49690 00:25:12.995 [2024-11-20 09:58:43.753356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.995 [2024-11-20 09:58:43.753360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.995 [2024-11-20 09:58:43.753363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf49690) 00:25:12.995 [2024-11-20 09:58:43.753370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.995 [2024-11-20 09:58:43.753381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab580, cid 3, qid 0 00:25:12.995 [2024-11-20 09:58:43.753546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.995 [2024-11-20 09:58:43.753552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.995 [2024-11-20 09:58:43.753558] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.995 [2024-11-20 09:58:43.753562] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab580) on tqpair=0xf49690 00:25:12.995 [2024-11-20 09:58:43.753571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.995 [2024-11-20 09:58:43.753575] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.995 [2024-11-20 09:58:43.753579] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf49690) 00:25:12.995 [2024-11-20 09:58:43.753586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.995 [2024-11-20 09:58:43.753596] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab580, cid 3, qid 0 00:25:12.995 [2024-11-20 09:58:43.753807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.995 [2024-11-20 09:58:43.753814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.995 [2024-11-20 09:58:43.753817] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.995 [2024-11-20 09:58:43.753821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab580) on tqpair=0xf49690 00:25:12.995 [2024-11-20 09:58:43.753830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.995 [2024-11-20 09:58:43.753834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.995 [2024-11-20 09:58:43.753838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf49690) 00:25:12.995 [2024-11-20 09:58:43.753845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.995 [2024-11-20 09:58:43.753855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab580, cid 3, qid 0 00:25:12.995 [2024-11-20 09:58:43.754029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.995 [2024-11-20 09:58:43.754035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.995 [2024-11-20 09:58:43.754039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.995 [2024-11-20 09:58:43.754043] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab580) on tqpair=0xf49690 00:25:12.995 [2024-11-20 09:58:43.754052] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:25:12.995 [2024-11-20 09:58:43.754056] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.995 [2024-11-20 09:58:43.754060] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf49690)
00:25:12.995 [2024-11-20 09:58:43.754066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.995 [2024-11-20 09:58:43.754077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfab580, cid 3, qid 0
00:25:12.995 [2024-11-20 09:58:43.758169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.995 [2024-11-20 09:58:43.758177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.995 [2024-11-20 09:58:43.758181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.995 [2024-11-20 09:58:43.758184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfab580) on tqpair=0xf49690
00:25:12.995 [2024-11-20 09:58:43.758193] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds
00:25:12.995 0%
00:25:12.995 Data Units Read: 0
00:25:12.995 Data Units Written: 0
00:25:12.995 Host Read Commands: 0
00:25:12.995 Host Write Commands: 0
00:25:12.995 Controller Busy Time: 0 minutes
00:25:12.995 Power Cycles: 0
00:25:12.995 Power On Hours: 0 hours
00:25:12.995 Unsafe Shutdowns: 0
00:25:12.995 Unrecoverable Media Errors: 0
00:25:12.995 Lifetime Error Log Entries: 0
00:25:12.995 Warning Temperature Time: 0 minutes
00:25:12.995 Critical Temperature Time: 0 minutes
00:25:12.995
00:25:12.995 Number of Queues
00:25:12.995 ================
00:25:12.995 Number of I/O Submission Queues: 127
00:25:12.995 Number of I/O Completion Queues: 127
00:25:12.995
00:25:12.995 Active Namespaces
00:25:12.995 =================
00:25:12.995 Namespace ID:1
00:25:12.995 Error Recovery Timeout: Unlimited
00:25:12.995 Command Set Identifier: NVM (00h)
00:25:12.995 Deallocate: Supported
00:25:12.995 Deallocated/Unwritten Error: Not Supported
00:25:12.995 Deallocated Read Value: Unknown
00:25:12.995 Deallocate in Write Zeroes: Not Supported
00:25:12.995 Deallocated Guard Field: 0xFFFF
00:25:12.995 Flush: Supported
00:25:12.995 Reservation: Supported
00:25:12.995 Namespace Sharing Capabilities: Multiple Controllers
00:25:12.995 Size (in LBAs): 131072 (0GiB)
00:25:12.995 Capacity (in LBAs): 131072 (0GiB)
00:25:12.995 Utilization (in LBAs): 131072 (0GiB)
00:25:12.995 NGUID: ABCDEF0123456789ABCDEF0123456789
00:25:12.995 EUI64: ABCDEF0123456789
00:25:12.995 UUID: c613b24c-6848-4210-9b97-c3e2aee56a35
00:25:12.995 Thin Provisioning: Not Supported
00:25:12.995 Per-NS Atomic Units: Yes
00:25:12.995 Atomic Boundary Size (Normal): 0
00:25:12.995 Atomic Boundary Size (PFail): 0
00:25:12.995 Atomic Boundary Offset: 0
00:25:12.995 Maximum Single Source Range Length: 65535
00:25:12.995 Maximum Copy Length: 65535
00:25:12.995 Maximum Source Range Count: 1
00:25:12.995 NGUID/EUI64 Never Reused: No
00:25:12.995 Namespace Write Protected: No
00:25:12.995 Number of LBA Formats: 1
00:25:12.995 Current LBA Format: LBA Format #00
00:25:12.995 LBA Format #00: Data Size: 512 Metadata Size: 0
00:25:12.995
00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:12.995 09:58:43
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:12.995 rmmod nvme_tcp 00:25:12.995 rmmod nvme_fabrics 00:25:12.995 rmmod nvme_keyring 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1468908 ']' 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1468908 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1468908 ']' 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1468908 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.995 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1468908 00:25:13.256 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:13.256 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:13.256 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1468908' 00:25:13.256 killing process with pid 1468908 00:25:13.256 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1468908 00:25:13.256 09:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1468908 00:25:13.256 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:13.256 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:13.256 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:13.256 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:25:13.256 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:25:13.256 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:13.256 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:25:13.256 09:58:44 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:13.256 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:13.256 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.256 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.256 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:15.796 00:25:15.796 real 0m11.699s 00:25:15.796 user 0m8.631s 00:25:15.796 sys 0m6.246s 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.796 ************************************ 00:25:15.796 END TEST nvmf_identify 00:25:15.796 ************************************ 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.796 ************************************ 00:25:15.796 START TEST nvmf_perf 00:25:15.796 ************************************ 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:15.796 * Looking for test storage... 
00:25:15.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:15.796 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.797 --rc genhtml_branch_coverage=1 00:25:15.797 --rc genhtml_function_coverage=1 00:25:15.797 --rc genhtml_legend=1 00:25:15.797 --rc geninfo_all_blocks=1 00:25:15.797 --rc geninfo_unexecuted_blocks=1 00:25:15.797 00:25:15.797 ' 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.797 --rc genhtml_branch_coverage=1 00:25:15.797 --rc genhtml_function_coverage=1 00:25:15.797 --rc genhtml_legend=1 00:25:15.797 --rc geninfo_all_blocks=1 00:25:15.797 --rc geninfo_unexecuted_blocks=1 00:25:15.797 00:25:15.797 ' 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.797 --rc genhtml_branch_coverage=1 00:25:15.797 --rc genhtml_function_coverage=1 00:25:15.797 --rc genhtml_legend=1 00:25:15.797 --rc geninfo_all_blocks=1 00:25:15.797 --rc geninfo_unexecuted_blocks=1 00:25:15.797 00:25:15.797 ' 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.797 --rc genhtml_branch_coverage=1 00:25:15.797 --rc genhtml_function_coverage=1 00:25:15.797 --rc genhtml_legend=1 00:25:15.797 --rc geninfo_all_blocks=1 00:25:15.797 --rc geninfo_unexecuted_blocks=1 00:25:15.797 00:25:15.797 ' 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:15.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.797 09:58:46 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:15.797 09:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:23.933 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:23.933 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:23.933 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:23.933 09:58:53 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:23.933 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:23.933 09:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:23.933 09:58:54 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:23.933 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:23.933 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:23.933 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:23.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:23.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:25:23.933 00:25:23.933 --- 10.0.0.2 ping statistics --- 00:25:23.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.933 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:25:23.933 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:23.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:23.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:25:23.933 00:25:23.933 --- 10.0.0.1 ping statistics --- 00:25:23.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.933 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:25:23.933 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:23.933 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:25:23.933 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:23.933 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:23.933 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:23.933 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:23.934 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:23.934 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:23.934 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:23.934 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:23.934 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:23.934 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:23.934 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:23.934 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1473388 00:25:23.934 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1473388 00:25:23.934 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:23.934 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1473388 ']' 00:25:23.934 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.934 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:23.934 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:25:23.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.934 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:23.934 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:23.934 [2024-11-20 09:58:54.171404] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:25:23.934 [2024-11-20 09:58:54.171508] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.934 [2024-11-20 09:58:54.287606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:23.934 [2024-11-20 09:58:54.341232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:23.934 [2024-11-20 09:58:54.341285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.934 [2024-11-20 09:58:54.341294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:23.934 [2024-11-20 09:58:54.341301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:23.934 [2024-11-20 09:58:54.341307] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:23.934 [2024-11-20 09:58:54.343701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.934 [2024-11-20 09:58:54.343861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:23.934 [2024-11-20 09:58:54.344026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.934 [2024-11-20 09:58:54.344026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:24.193 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:24.193 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:25:24.193 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:24.193 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:24.193 09:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:24.193 09:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.193 09:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:24.193 09:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:24.763 09:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:24.763 09:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:25.023 09:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:25:25.024 09:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:25.284 09:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
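For readers following the trace: the bdev provisioning that perf.sh performs around this point collapses into a short sketch. Paths are abbreviated relative to the SPDK repo root, and the Nvme0 name comes from the config emitted by gen_nvme.sh; this is an illustrative replay of the traced commands, not part of the captured run.

  # Assumes nvmf_tgt is already listening on /var/tmp/spdk.sock (started above).
  rpc=scripts/rpc.py
  scripts/gen_nvme.sh | $rpc load_subsystem_config           # attach the local NVMe as bdev Nvme0
  local_nvme_trid=$($rpc framework_get_config bdev \
      | jq -r '.[].params | select(.name=="Nvme0").traddr')  # resolves to 0000:65:00.0 here
  bdevs=$($rpc bdev_malloc_create 64 512)                    # 64 MB, 512 B blocks -> "Malloc0"
  [ -n "$local_nvme_trid" ] && bdevs="$bdevs Nvme0n1"        # append the NVMe namespace bdev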
00:25:25.284 09:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:25:25.284 09:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:25.284 09:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:25.284 09:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:25.284 [2024-11-20 09:58:56.146211] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.284 09:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:25.544 09:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:25.544 09:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:25.805 09:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:25.805 09:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:26.066 09:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:26.066 [2024-11-20 09:58:56.913254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:26.066 09:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:26.326 09:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:26.326 09:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:26.326 09:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:26.326 09:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:27.707 Initializing NVMe Controllers 00:25:27.707 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:25:27.707 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:25:27.707 Initialization complete. Launching workers. 
00:25:27.707 ======================================================== 00:25:27.707 Latency(us) 00:25:27.707 Device Information : IOPS MiB/s Average min max 00:25:27.707 PCIE (0000:65:00.0) NSID 1 from core 0: 77432.27 302.47 412.66 13.39 5956.85 00:25:27.707 ======================================================== 00:25:27.707 Total : 77432.27 302.47 412.66 13.39 5956.85 00:25:27.707 00:25:27.707 09:58:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:29.088 Initializing NVMe Controllers 00:25:29.088 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:29.088 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:29.088 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:29.088 Initialization complete. Launching workers. 00:25:29.088 ======================================================== 00:25:29.088 Latency(us) 00:25:29.088 Device Information : IOPS MiB/s Average min max 00:25:29.088 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 115.00 0.45 9050.20 244.79 45948.86 00:25:29.088 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 20519.58 5983.43 49882.83 00:25:29.088 ======================================================== 00:25:29.088 Total : 166.00 0.65 12573.93 244.79 49882.83 00:25:29.088 00:25:29.088 09:58:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:30.471 Initializing NVMe Controllers 00:25:30.471 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:30.471 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:30.471 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:30.471 Initialization complete. Launching workers. 00:25:30.471 ======================================================== 00:25:30.471 Latency(us) 00:25:30.471 Device Information : IOPS MiB/s Average min max 00:25:30.471 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11506.54 44.95 2784.46 465.44 42203.83 00:25:30.471 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3714.65 14.51 8671.66 5832.08 47822.04 00:25:30.471 ======================================================== 00:25:30.471 Total : 15221.19 59.46 4221.20 465.44 47822.04 00:25:30.471 00:25:30.471 09:59:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:30.471 09:59:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:30.471 09:59:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:33.012 Initializing NVMe Controllers 00:25:33.012 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:33.012 Controller IO queue size 128, less than required. 00:25:33.012 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:25:33.012 Controller IO queue size 128, less than required. 00:25:33.012 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:33.012 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:33.012 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:33.012 Initialization complete. Launching workers. 00:25:33.012 ======================================================== 00:25:33.012 Latency(us) 00:25:33.012 Device Information : IOPS MiB/s Average min max 00:25:33.012 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1793.73 448.43 72274.57 35096.97 111109.30 00:25:33.012 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 603.90 150.98 222666.70 55152.07 338891.11 00:25:33.012 ======================================================== 00:25:33.012 Total : 2397.63 599.41 110154.59 35096.97 338891.11 00:25:33.012 00:25:33.012 09:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:33.012 No valid NVMe controllers or AIO or URING devices found 00:25:33.012 Initializing NVMe Controllers 00:25:33.012 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:33.012 Controller IO queue size 128, less than required. 00:25:33.012 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:33.012 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:33.012 Controller IO queue size 128, less than required. 00:25:33.012 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:33.012 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:33.012 WARNING: Some requested NVMe devices were skipped 00:25:33.012 09:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:35.560 Initializing NVMe Controllers 00:25:35.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:35.560 Controller IO queue size 128, less than required. 00:25:35.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:35.560 Controller IO queue size 128, less than required. 00:25:35.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:35.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:35.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:35.560 Initialization complete. Launching workers. 
00:25:35.560 00:25:35.560 ==================== 00:25:35.560 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:35.560 TCP transport: 00:25:35.560 polls: 37966 00:25:35.560 idle_polls: 22710 00:25:35.560 sock_completions: 15256 00:25:35.560 nvme_completions: 6745 00:25:35.560 submitted_requests: 10148 00:25:35.560 queued_requests: 1 00:25:35.560 00:25:35.560 ==================== 00:25:35.560 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:35.560 TCP transport: 00:25:35.560 polls: 52143 00:25:35.560 idle_polls: 33253 00:25:35.560 sock_completions: 18890 00:25:35.560 nvme_completions: 7157 00:25:35.560 submitted_requests: 10814 00:25:35.560 queued_requests: 1 00:25:35.560 ======================================================== 00:25:35.560 Latency(us) 00:25:35.560 Device Information : IOPS MiB/s Average min max 00:25:35.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1685.85 421.46 77803.47 42269.14 134967.91 00:25:35.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1788.84 447.21 72348.86 32774.16 118723.52 00:25:35.560 ======================================================== 00:25:35.560 Total : 3474.69 868.67 74995.33 32774.16 134967.91 00:25:35.560 00:25:35.560 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:35.560 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:35.560 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:35.560 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:35.560 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:35.560 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:35.560 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:25:35.560 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:35.560 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:25:35.560 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:35.560 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:35.560 rmmod nvme_tcp 00:25:35.560 rmmod nvme_fabrics 00:25:35.560 rmmod nvme_keyring 00:25:35.821 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:35.821 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:25:35.821 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:25:35.821 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1473388 ']' 00:25:35.821 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1473388 00:25:35.821 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1473388 ']' 00:25:35.821 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1473388 00:25:35.821 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:25:35.821 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:35.821 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1473388 00:25:35.821 09:59:06 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:35.821 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:35.821 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1473388' 00:25:35.821 killing process with pid 1473388 00:25:35.821 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1473388 00:25:35.821 09:59:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1473388 00:25:37.733 09:59:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:37.733 09:59:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:37.733 09:59:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:37.733 09:59:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:37.733 09:59:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:25:37.733 09:59:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:37.733 09:59:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:25:37.733 09:59:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:37.733 09:59:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:37.733 09:59:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.733 09:59:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.733 09:59:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:40.278 00:25:40.278 real 0m24.319s 00:25:40.278 user 0m58.319s 00:25:40.278 sys 0m8.622s 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:40.278 ************************************ 00:25:40.278 END TEST nvmf_perf 00:25:40.278 ************************************ 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.278 ************************************ 00:25:40.278 START TEST nvmf_fio_host 00:25:40.278 ************************************ 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:40.278 * Looking for test storage... 
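A condensed sketch of the cleanup that nvmftestfini ran at the end of nvmf_perf above, which is easier to follow here than in the raw trace. The namespace removal is done by the internal _remove_spdk_ns helper; a plain ip netns delete is used below as an illustrative stand-in.

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp                                  # unload initiator-side modules
  modprobe -v -r nvme-fabrics
  kill $nvmfpid && wait $nvmfpid                           # killprocess: stop the target
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the tagged SPDK rule
  ip netns delete cvl_0_0_ns_spdk                          # stand-in for _remove_spdk_ns
  ip -4 addr flush cvl_0_1                                 # clear the initiator address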
00:25:40.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:40.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.278 --rc genhtml_branch_coverage=1 00:25:40.278 --rc genhtml_function_coverage=1 00:25:40.278 --rc genhtml_legend=1 00:25:40.278 --rc geninfo_all_blocks=1 00:25:40.278 --rc geninfo_unexecuted_blocks=1 00:25:40.278 00:25:40.278 ' 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:40.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.278 --rc genhtml_branch_coverage=1 00:25:40.278 --rc genhtml_function_coverage=1 00:25:40.278 --rc genhtml_legend=1 00:25:40.278 --rc geninfo_all_blocks=1 00:25:40.278 --rc geninfo_unexecuted_blocks=1 00:25:40.278 00:25:40.278 ' 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:40.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.278 --rc genhtml_branch_coverage=1 00:25:40.278 --rc genhtml_function_coverage=1 00:25:40.278 --rc genhtml_legend=1 00:25:40.278 --rc geninfo_all_blocks=1 00:25:40.278 --rc geninfo_unexecuted_blocks=1 00:25:40.278 00:25:40.278 ' 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:40.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.278 --rc genhtml_branch_coverage=1 00:25:40.278 --rc genhtml_function_coverage=1 00:25:40.278 --rc genhtml_legend=1 00:25:40.278 --rc geninfo_all_blocks=1 00:25:40.278 --rc geninfo_unexecuted_blocks=1 00:25:40.278 00:25:40.278 ' 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.278 09:59:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.278 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:40.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:40.279 
09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:40.279 09:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:48.423 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:48.423 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:48.423 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:48.423 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.423 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:48.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:25:48.424 00:25:48.424 --- 10.0.0.2 ping statistics --- 00:25:48.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.424 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:48.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:25:48.424 00:25:48.424 --- 10.0.0.1 ping statistics --- 00:25:48.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.424 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1480324 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1480324 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1480324 ']' 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:48.424 09:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.424 [2024-11-20 09:59:18.518175] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
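The fio host test reuses the same two-port split as nvmf_perf: one port of the NIC (cvl_0_0) is moved into a namespace and carries the target, while the other (cvl_0_1) stays in the root namespace as the initiator. Condensed from the trace above (the iptables comment is shortened here); the launch-and-wait at the end is an approximation of the nvmfappstart/waitforlisten helpers, not their literal implementation.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF                       # tagged so teardown can strip it
  ping -c 1 10.0.0.2                                       # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done  # ~waitforlisten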
00:25:48.424 [2024-11-20 09:59:18.518242] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.424 [2024-11-20 09:59:18.617086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:48.424 [2024-11-20 09:59:18.671184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.424 [2024-11-20 09:59:18.671236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.424 [2024-11-20 09:59:18.671244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:48.424 [2024-11-20 09:59:18.671252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:48.424 [2024-11-20 09:59:18.671258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:48.424 [2024-11-20 09:59:18.673601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.424 [2024-11-20 09:59:18.673749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:48.424 [2024-11-20 09:59:18.673912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.424 [2024-11-20 09:59:18.673912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:48.685 09:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:48.685 09:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:25:48.685 09:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:48.685 [2024-11-20 09:59:19.510787] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:48.685 09:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:48.685 09:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:48.685 09:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.685 09:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:48.945 Malloc1 00:25:48.945 09:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:49.205 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:49.465 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:49.726 [2024-11-20 09:59:20.390197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.726 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:49.726 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:49.726 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:49.726 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:49.726 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:49.726 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:49.726 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:49.726 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:49.726 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:49.726 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:49.726 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:49.726 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:49.726 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:49.726 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:50.012 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:50.012 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:50.012 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:50.012 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:50.012 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:50.012 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:50.012 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:50.012 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:50.012 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:50.013 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:50.278 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:50.278 fio-3.35 00:25:50.278 Starting 1 thread 00:25:52.819 00:25:52.819 test: (groupid=0, jobs=1): 
err= 0: pid=1481179: Wed Nov 20 09:59:23 2024 00:25:52.819 read: IOPS=13.8k, BW=53.8MiB/s (56.4MB/s)(108MiB/2005msec) 00:25:52.819 slat (usec): min=2, max=277, avg= 2.13, stdev= 2.15 00:25:52.819 clat (usec): min=3562, max=8935, avg=5110.45, stdev=365.13 00:25:52.819 lat (usec): min=3565, max=8937, avg=5112.59, stdev=365.18 00:25:52.819 clat percentiles (usec): 00:25:52.819 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:25:52.819 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:25:52.819 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:25:52.819 | 99.00th=[ 5932], 99.50th=[ 6194], 99.90th=[ 7635], 99.95th=[ 7963], 00:25:52.819 | 99.99th=[ 8455] 00:25:52.819 bw ( KiB/s): min=53820, max=55608, per=99.95%, avg=55083.00, stdev=845.21, samples=4 00:25:52.819 iops : min=13455, max=13902, avg=13770.75, stdev=211.30, samples=4 00:25:52.819 write: IOPS=13.8k, BW=53.7MiB/s (56.4MB/s)(108MiB/2005msec); 0 zone resets 00:25:52.819 slat (usec): min=2, max=220, avg= 2.20, stdev= 1.48 00:25:52.819 clat (usec): min=2538, max=8087, avg=4127.54, stdev=310.42 00:25:52.819 lat (usec): min=2556, max=8089, avg=4129.75, stdev=310.50 00:25:52.819 clat percentiles (usec): 00:25:52.819 | 1.00th=[ 3392], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3916], 00:25:52.819 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:25:52.819 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4555], 00:25:52.819 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 6652], 99.95th=[ 7373], 00:25:52.819 | 99.99th=[ 7963] 00:25:52.819 bw ( KiB/s): min=54155, max=55448, per=99.98%, avg=55026.75, stdev=595.04, samples=4 00:25:52.819 iops : min=13538, max=13862, avg=13756.50, stdev=149.13, samples=4 00:25:52.819 lat (msec) : 4=16.00%, 10=84.00% 00:25:52.819 cpu : usr=73.95%, sys=24.85%, ctx=33, majf=0, minf=17 00:25:52.819 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:52.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:52.819 issued rwts: total=27623,27586,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.819 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:52.819 00:25:52.819 Run status group 0 (all jobs): 00:25:52.819 READ: bw=53.8MiB/s (56.4MB/s), 53.8MiB/s-53.8MiB/s (56.4MB/s-56.4MB/s), io=108MiB (113MB), run=2005-2005msec 00:25:52.819 WRITE: bw=53.7MiB/s (56.4MB/s), 53.7MiB/s-53.7MiB/s (56.4MB/s-56.4MB/s), io=108MiB (113MB), run=2005-2005msec 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:52.819 
09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:52.819 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:53.079 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:53.079 fio-3.35 00:25:53.079 Starting 1 thread 00:25:55.623 00:25:55.623 test: (groupid=0, jobs=1): err= 0: pid=1481794: Wed Nov 20 09:59:26 2024 00:25:55.623 read: IOPS=9417, BW=147MiB/s (154MB/s)(295MiB/2004msec) 00:25:55.623 slat (usec): min=3, max=110, avg= 3.60, stdev= 1.57 00:25:55.623 clat (usec): min=1347, max=14974, avg=8262.84, stdev=1924.02 00:25:55.623 lat (usec): min=1351, max=14978, avg=8266.44, stdev=1924.12 00:25:55.623 clat percentiles (usec): 00:25:55.623 | 1.00th=[ 4146], 5.00th=[ 5276], 10.00th=[ 5866], 20.00th=[ 6521], 00:25:55.623 | 30.00th=[ 7046], 40.00th=[ 7635], 50.00th=[ 8225], 60.00th=[ 8717], 00:25:55.623 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[10683], 95.00th=[11207], 00:25:55.623 | 99.00th=[12649], 99.50th=[13173], 99.90th=[14222], 99.95th=[14615], 00:25:55.623 | 99.99th=[14877] 00:25:55.623 bw ( KiB/s): min=71008, max=82016, per=49.36%, avg=74384.00, stdev=5125.16, samples=4 00:25:55.623 iops : min= 4438, max= 5126, avg=4649.00, stdev=320.32, samples=4 00:25:55.623 write: IOPS=5543, BW=86.6MiB/s (90.8MB/s)(153MiB/1761msec); 0 zone resets 00:25:55.623 slat (usec): min=39, max=298, 
avg=40.80, stdev= 6.67 00:25:55.623 clat (usec): min=2951, max=14292, avg=9157.24, stdev=1384.87 00:25:55.623 lat (usec): min=2991, max=14429, avg=9198.03, stdev=1385.96 00:25:55.623 clat percentiles (usec): 00:25:55.623 | 1.00th=[ 5800], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 8029], 00:25:55.623 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9503], 00:25:55.623 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10945], 95.00th=[11469], 00:25:55.623 | 99.00th=[12518], 99.50th=[12911], 99.90th=[13698], 99.95th=[13960], 00:25:55.623 | 99.99th=[14353] 00:25:55.623 bw ( KiB/s): min=74016, max=85248, per=87.53%, avg=77632.00, stdev=5134.31, samples=4 00:25:55.623 iops : min= 4626, max= 5328, avg=4852.00, stdev=320.89, samples=4 00:25:55.623 lat (msec) : 2=0.01%, 4=0.67%, 10=75.84%, 20=23.49% 00:25:55.623 cpu : usr=83.97%, sys=14.48%, ctx=15, majf=0, minf=27 00:25:55.623 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:55.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:55.623 issued rwts: total=18873,9762,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.623 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:55.623 00:25:55.623 Run status group 0 (all jobs): 00:25:55.623 READ: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=295MiB (309MB), run=2004-2004msec 00:25:55.623 WRITE: bw=86.6MiB/s (90.8MB/s), 86.6MiB/s-86.6MiB/s (90.8MB/s-90.8MB/s), io=153MiB (160MB), run=1761-1761msec 00:25:55.623 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:55.885 rmmod nvme_tcp 00:25:55.885 rmmod nvme_fabrics 00:25:55.885 rmmod nvme_keyring 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1480324 ']' 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1480324 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1480324 ']' 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 
1480324 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1480324 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1480324' 00:25:55.885 killing process with pid 1480324 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1480324 00:25:55.885 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1480324 00:25:56.146 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:56.146 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:56.146 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:56.146 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:56.146 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:56.146 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:56.146 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:56.146 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:56.146 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:56.146 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.146 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.146 09:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.109 09:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:58.109 00:25:58.109 real 0m18.217s 00:25:58.109 user 1m13.493s 00:25:58.109 sys 0m7.811s 00:25:58.109 09:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:58.109 09:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.109 ************************************ 00:25:58.109 END TEST nvmf_fio_host 00:25:58.109 ************************************ 00:25:58.109 09:59:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:58.109 09:59:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:58.109 09:59:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:58.109 09:59:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.109 ************************************ 00:25:58.109 START TEST nvmf_failover 00:25:58.109 ************************************ 00:25:58.109 09:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:58.471 * Looking for test storage... 00:25:58.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:58.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.471 --rc genhtml_branch_coverage=1 00:25:58.471 --rc genhtml_function_coverage=1 00:25:58.471 --rc genhtml_legend=1 00:25:58.471 --rc geninfo_all_blocks=1 00:25:58.471 --rc geninfo_unexecuted_blocks=1 00:25:58.471 00:25:58.471 ' 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:58.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.471 --rc genhtml_branch_coverage=1 00:25:58.471 --rc genhtml_function_coverage=1 00:25:58.471 --rc genhtml_legend=1 00:25:58.471 --rc geninfo_all_blocks=1 00:25:58.471 --rc geninfo_unexecuted_blocks=1 00:25:58.471 00:25:58.471 ' 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:58.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.471 --rc genhtml_branch_coverage=1 00:25:58.471 --rc genhtml_function_coverage=1 00:25:58.471 --rc genhtml_legend=1 00:25:58.471 --rc geninfo_all_blocks=1 00:25:58.471 --rc geninfo_unexecuted_blocks=1 00:25:58.471 00:25:58.471 ' 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:58.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.471 --rc genhtml_branch_coverage=1 00:25:58.471 --rc genhtml_function_coverage=1 00:25:58.471 --rc genhtml_legend=1 00:25:58.471 --rc geninfo_all_blocks=1 00:25:58.471 --rc geninfo_unexecuted_blocks=1 00:25:58.471 00:25:58.471 ' 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:58.471 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:58.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
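The fio runs earlier in this log (host/fio.sh@41 and @45) go through the harness's fio_plugin wrapper: it checks with ldd whether the SPDK engine was linked against a sanitizer runtime, preloads whatever it finds together with the engine itself, and encodes the NVMe/TCP target in fio's --filename. A minimal sketch of that invocation, assuming the relative build paths used in this log and an example_config.fio that already sets ioengine=spdk:

  # detect a sanitizer runtime the engine depends on (empty string if none, as here)
  plugin=./build/fio/spdk_nvme
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

  # preload sanitizer (if any) plus the SPDK ioengine, and point fio at the target;
  # the transport tuple travels inside --filename rather than as a device path
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      ./app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096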
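The lt/cmp_versions trace above (scripts/common.sh@333-368, here deciding whether lcov 1.15 predates 2) splits both version strings on ".", "-" and ":" and compares them component by component, padding the shorter one with zeros. A minimal re-implementation of that comparison; this sketch assumes purely numeric components, whereas the real helper also normalizes non-numeric fields through its decimal function:

  # return success if $1 is a strictly older version than $2 (numeric components only)
  version_lt() {
      local -a v1 v2
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      local i len=${#v1[@]}
      (( ${#v2[@]} > len )) && len=${#v2[@]}
      for (( i = 0; i < len; i++ )); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly newer
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly older
      done
      return 1   # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov predates 2.x"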
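The PATH exports above keep growing because paths/export.sh prepends the same toolchain directories each time it is sourced, once per nested script. A purely illustrative sketch of collapsing the duplicates while keeping first-seen order; the harness itself does not do this, and the repetition is harmless apart from log noise:

  # drop duplicate PATH entries, preserving the order of first appearance
  PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
  export PATH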
00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:58.472 09:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:06.664 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:06.664 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:06.664 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:06.664 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:06.664 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:06.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:06.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:26:06.664 00:26:06.664 --- 10.0.0.2 ping statistics --- 00:26:06.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.664 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:06.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:06.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:26:06.665 00:26:06.665 --- 10.0.0.1 ping statistics --- 00:26:06.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.665 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1486445 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1486445 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1486445 ']' 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.665 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:06.665 [2024-11-20 09:59:36.779598] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:26:06.665 [2024-11-20 09:59:36.779664] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.665 [2024-11-20 09:59:36.878839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:06.665 [2024-11-20 09:59:36.929767] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:06.665 [2024-11-20 09:59:36.929819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:06.665 [2024-11-20 09:59:36.929828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.665 [2024-11-20 09:59:36.929835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.665 [2024-11-20 09:59:36.929842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:06.665 [2024-11-20 09:59:36.931925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.665 [2024-11-20 09:59:36.932090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.665 [2024-11-20 09:59:36.932090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:06.925 09:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:06.925 09:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:06.925 09:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:06.925 09:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:06.925 09:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:06.925 09:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.925 09:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:06.925 [2024-11-20 09:59:37.824287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.187 09:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:07.187 Malloc0 00:26:07.187 09:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:07.449 09:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:07.710 09:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:07.972 [2024-11-20 09:59:38.642181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.972 09:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:07.972 [2024-11-20 09:59:38.842808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:07.972 09:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:08.234 [2024-11-20 09:59:39.047556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:26:08.234 09:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1487038 00:26:08.234 09:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:08.234 09:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:08.234 09:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1487038 /var/tmp/bdevperf.sock 00:26:08.234 09:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1487038 ']' 00:26:08.234 09:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:08.234 09:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:08.234 09:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:08.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:08.234 09:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:08.234 09:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:09.175 09:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.175 09:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:09.175 09:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:09.436 NVMe0n1 00:26:09.436 09:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:09.696 00:26:09.956 09:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1487296 00:26:09.956 09:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:09.956 09:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:10.898 09:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:10.898 [2024-11-20 09:59:41.781715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10484f0 is same with the state(6) to be set 00:26:10.898 [2024-11-20 09:59:41.781756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10484f0 is same with the state(6) to be set 00:26:10.898 [2024-11-20 09:59:41.781762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10484f0 is same with the state(6) to be set 00:26:10.898 
00:26:10.898 09:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:10.898 [2024-11-20 09:59:41.781715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10484f0 is same with the state(6) to be set
[...]
00:26:10.898 [2024-11-20 09:59:41.782058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10484f0 is same with the state(6) to be set
00:26:11.159 09:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
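Each failover leg follows the same pattern: remove the listener carrying the active path, then give bdev_nvme a few seconds to reconnect on a surviving path. The recv-state *ERROR* burst above is the target tearing down the 4420 qpair; the I/O that was in flight is what later shows up in try.txt as ABORTED - SQ DELETION completions. The first leg, as a sketch of the log's own commands:

    # Drop the active path; queued I/O aborts and NVMe0 fails over to the 4421 path
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3   # settle time before the next path change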
00:26:14.459 09:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:14.459
00:26:14.459 09:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:14.459 [2024-11-20 09:59:45.278469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1049040 is same with the state(6) to be set
[...]
00:26:14.461 [2024-11-20 09:59:45.279110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1049040 is same with the state(6) to be set
00:26:14.461 09:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
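The failback leg that follows is the mirror image, re-announcing the original port before removing the one now in use:

    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # restore first path
    sleep 1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # push I/O back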
00:26:17.758 09:59:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:17.758 [2024-11-20 09:59:48.467306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:17.758 09:59:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:26:18.698 09:59:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:18.959 [2024-11-20 09:59:49.657654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0e4c0 is same with the state(6) to be set
[...]
00:26:18.960 [2024-11-20 09:59:49.658029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0e4c0 is same with the state(6) to be set
00:26:18.960 09:59:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1487296
00:26:25.547 {
00:26:25.547   "results": [
00:26:25.547     {
00:26:25.547       "job": "NVMe0n1",
00:26:25.547       "core_mask": "0x1",
00:26:25.547       "workload": "verify",
00:26:25.547       "status": "finished",
00:26:25.547       "verify_range": {
00:26:25.547         "start": 0,
00:26:25.547         "length": 16384
00:26:25.547       },
00:26:25.547       "queue_depth": 128,
00:26:25.547       "io_size": 4096,
00:26:25.547       "runtime": 15.005469,
00:26:25.547       "iops": 12470.986411687632,
00:26:25.547       "mibps": 48.714790670654814,
00:26:25.547       "io_failed": 10341,
00:26:25.547       "io_timeout": 0,
00:26:25.547       "avg_latency_us": 9704.306282210991,
00:26:25.547       "min_latency_us": 542.72,
00:26:25.547       "max_latency_us": 23265.28
00:26:25.547     }
00:26:25.547   ],
00:26:25.547   "core_count": 1
00:26:25.547 }
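The derived fields in the summary are easy to sanity-check: mibps is iops times io_size scaled to MiB, as in this sketch (values copied from the JSON above):

    awk 'BEGIN { printf "%.2f MiB/s\n", 12470.986411687632 * 4096 / 1048576 }'   # prints 48.71, matching "mibps"

The 10341 io_failed entries are the verify reads aborted across the three listener removals; they reappear in the try.txt dump below as ABORTED - SQ DELETION completions, while the run itself still ends with status "finished".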
00:26:25.547 09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1487038
09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1487038 ']'
09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1487038
09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1487038
09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1487038'
killing process with pid 1487038
09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1487038
09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1487038
00:26:25.547 09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:25.547 [2024-11-20 09:59:39.129467] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:26:25.547 [2024-11-20 09:59:39.129526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487038 ]
00:26:25.547 [2024-11-20 09:59:39.216418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:25.547 [2024-11-20 09:59:39.252270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:25.547 Running I/O for 15 seconds...
00:26:25.547 11598.00 IOPS, 45.30 MiB/s [2024-11-20T08:59:56.463Z]
00:26:25.547 [2024-11-20 09:59:41.782496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.547 [2024-11-20 09:59:41.782528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.547 [2024-11-20 09:59:41.782546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.547 [2024-11-20 09:59:41.782555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.547 [2024-11-20 09:59:41.782565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.547 [2024-11-20 09:59:41.782573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.547 [2024-11-20 09:59:41.782582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.547 [2024-11-20 09:59:41.782590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.547 [2024-11-20 09:59:41.782599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.547 [2024-11-20 09:59:41.782607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.547 [2024-11-20 09:59:41.782617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.547 [2024-11-20 09:59:41.782625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.547 [2024-11-20 09:59:41.782634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.547 [2024-11-20 09:59:41.782642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.547 [2024-11-20 09:59:41.782651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:26:25.548 [2024-11-20 09:59:41.782659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.548 [2024-11-20 09:59:41.782668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.548 [2024-11-20 09:59:41.782676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.548 [2024-11-20 09:59:41.782685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.548 [2024-11-20 09:59:41.782693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.548 [2024-11-20 09:59:41.782702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.548 [2024-11-20 09:59:41.782710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.548 [2024-11-20 09:59:41.782725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.548 [2024-11-20 09:59:41.782732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.548 [2024-11-20 09:59:41.782742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.548 [2024-11-20 09:59:41.782749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.548 [2024-11-20 09:59:41.782759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.548 [2024-11-20 09:59:41.782766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.548 [2024-11-20 09:59:41.782776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.548 [2024-11-20 09:59:41.782783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.548 [2024-11-20 09:59:41.782792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.548 [2024-11-20 09:59:41.782800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.548 [2024-11-20 09:59:41.782809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.548 [2024-11-20 09:59:41.782817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.548 [2024-11-20 09:59:41.782827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.548 [2024-11-20 
09:59:41.782834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.548 [2024-11-20 09:59:41.782843 - 09:59:41.784690] [condensed: 109 repeated command/completion pairs from nvme_qpair.c - READ sqid:1 lba:99520-100072 len:8 (SGL TRANSPORT DATA BLOCK) and WRITE sqid:1 lba:100080-100384 len:8 (SGL DATA BLOCK OFFSET), cid varies, every command completed ABORTED - SQ DELETION (00/08) qid:1]
00:26:25.551 [2024-11-20 09:59:41.784712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:25.551 [2024-11-20 09:59:41.784719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:25.551 [2024-11-20 09:59:41.784725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100392 len:8 PRP1 0x0 PRP2 0x0
00:26:25.551 [2024-11-20 09:59:41.784735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.551 [2024-11-20 09:59:41.784775] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:25.551 [2024-11-20 09:59:41.784797 - 09:59:41.784851] [condensed: 4 repeated pairs from nvme_qpair.c - ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0]
00:26:25.551 [2024-11-20 09:59:41.784866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:25.551 [2024-11-20 09:59:41.784905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x743d70 (9): Bad file descriptor
00:26:25.551 [2024-11-20 09:59:41.788474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:25.551 [2024-11-20 09:59:41.813198] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:26:25.551 11415.00 IOPS, 44.59 MiB/s [2024-11-20T08:59:56.467Z]
00:26:25.551 11320.33 IOPS, 44.22 MiB/s [2024-11-20T08:59:56.467Z]
00:26:25.551 11603.50 IOPS, 45.33 MiB/s [2024-11-20T08:59:56.467Z]
00:26:25.551 [2024-11-20 09:59:45.279749 - 09:59:45.280745] [condensed: 80 repeated command/completion pairs from nvme_qpair.c - READ sqid:1 lba:44400-45008 len:8 (SGL TRANSPORT DATA BLOCK) and WRITE sqid:1 lba:45016-45032 len:8 (SGL DATA BLOCK OFFSET), cid varies, every command completed ABORTED - SQ DELETION (00/08) qid:1]
00:26:25.553 [2024-11-20 09:59:45.280752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:80 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45120 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.280981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 
09:59:45.280993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.280999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.553 [2024-11-20 09:59:45.281005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.553 [2024-11-20 09:59:45.281012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.554 [2024-11-20 09:59:45.281019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.554 [2024-11-20 09:59:45.281030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.554 [2024-11-20 09:59:45.281041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.554 [2024-11-20 09:59:45.281053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.554 [2024-11-20 09:59:45.281065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.554 [2024-11-20 09:59:45.281076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.554 [2024-11-20 09:59:45.281088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.554 [2024-11-20 09:59:45.281110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45272 len:8 PRP1 0x0 PRP2 0x0 00:26:25.554 [2024-11-20 09:59:45.281116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.554 
[2024-11-20 09:59:45.281127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.554 [2024-11-20 09:59:45.281132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45280 len:8 PRP1 0x0 PRP2 0x0 00:26:25.554 [2024-11-20 09:59:45.281137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.554 [2024-11-20 09:59:45.281146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.554 [2024-11-20 09:59:45.281151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45288 len:8 PRP1 0x0 PRP2 0x0 00:26:25.554 [2024-11-20 09:59:45.281156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.554 [2024-11-20 09:59:45.281170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.554 [2024-11-20 09:59:45.281174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45296 len:8 PRP1 0x0 PRP2 0x0 00:26:25.554 [2024-11-20 09:59:45.281179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.554 [2024-11-20 09:59:45.281190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.554 [2024-11-20 09:59:45.281194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45304 len:8 PRP1 0x0 PRP2 0x0 00:26:25.554 [2024-11-20 09:59:45.281199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.554 [2024-11-20 09:59:45.281209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.554 [2024-11-20 09:59:45.281213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45312 len:8 PRP1 0x0 PRP2 0x0 00:26:25.554 [2024-11-20 09:59:45.281218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.554 [2024-11-20 09:59:45.281227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.554 [2024-11-20 09:59:45.281232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45320 len:8 PRP1 0x0 PRP2 0x0 00:26:25.554 [2024-11-20 09:59:45.281237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.554 [2024-11-20 09:59:45.281246] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.554 [2024-11-20 09:59:45.281250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45328 len:8 PRP1 0x0 PRP2 0x0 00:26:25.554 [2024-11-20 09:59:45.281255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.554 [2024-11-20 09:59:45.281264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.554 [2024-11-20 09:59:45.281268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45336 len:8 PRP1 0x0 PRP2 0x0 00:26:25.554 [2024-11-20 09:59:45.281274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.554 [2024-11-20 09:59:45.281284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.554 [2024-11-20 09:59:45.281288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45344 len:8 PRP1 0x0 PRP2 0x0 00:26:25.554 [2024-11-20 09:59:45.281293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.554 [2024-11-20 09:59:45.281303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.554 [2024-11-20 09:59:45.281308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45352 len:8 PRP1 0x0 PRP2 0x0 00:26:25.554 [2024-11-20 09:59:45.281313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.554 [2024-11-20 09:59:45.281323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.554 [2024-11-20 09:59:45.281327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45360 len:8 PRP1 0x0 PRP2 0x0 00:26:25.554 [2024-11-20 09:59:45.281334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.554 [2024-11-20 09:59:45.281344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.554 [2024-11-20 09:59:45.281348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45368 len:8 PRP1 0x0 PRP2 0x0 00:26:25.554 [2024-11-20 09:59:45.281353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.554 [2024-11-20 09:59:45.281362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:26:25.554 [2024-11-20 09:59:45.281367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45376 len:8 PRP1 0x0 PRP2 0x0 00:26:25.554 [2024-11-20 09:59:45.281372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.554 [2024-11-20 09:59:45.281382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.554 [2024-11-20 09:59:45.281386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45384 len:8 PRP1 0x0 PRP2 0x0 00:26:25.554 [2024-11-20 09:59:45.281391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.281397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.554 [2024-11-20 09:59:45.281400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.554 [2024-11-20 09:59:45.295147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45392 len:8 PRP1 0x0 PRP2 0x0 00:26:25.554 [2024-11-20 09:59:45.295184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.295198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.554 [2024-11-20 09:59:45.295204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.554 [2024-11-20 09:59:45.295211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45400 len:8 PRP1 0x0 PRP2 0x0 00:26:25.554 [2024-11-20 09:59:45.295219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.295226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.554 [2024-11-20 09:59:45.295231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.554 [2024-11-20 09:59:45.295237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45408 len:8 PRP1 0x0 PRP2 0x0 00:26:25.554 [2024-11-20 09:59:45.295244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.295251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.554 [2024-11-20 09:59:45.295257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.554 [2024-11-20 09:59:45.295264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45416 len:8 PRP1 0x0 PRP2 0x0 00:26:25.554 [2024-11-20 09:59:45.295270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.554 [2024-11-20 09:59:45.295311] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:26:25.554 [2024-11-20 09:59:45.295344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.555 [2024-11-20 09:59:45.295353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.555 [2024-11-20 09:59:45.295362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.555 [2024-11-20 09:59:45.295369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.555 [2024-11-20 09:59:45.295377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.555 [2024-11-20 09:59:45.295384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.555 [2024-11-20 09:59:45.295392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.555 [2024-11-20 09:59:45.295399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.555 [2024-11-20 09:59:45.295406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:25.555 [2024-11-20 09:59:45.295444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x743d70 (9): Bad file descriptor 00:26:25.555 [2024-11-20 09:59:45.298721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:25.555 [2024-11-20 09:59:45.440919] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
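
The status pair (00/08) stamped on every completion above is NVMe status code type 0x0 (Generic Command Status) with status code 0x08 (Command Aborted due to SQ Deletion), which is the expected outcome when the initiator deletes the I/O submission queue during failover; the trailing p/m/dnr fields in each record are the phase, more, and do-not-retry bits of the same status halfword. Below is a minimal standalone C sketch, independent of SPDK, of how such a (sct/sc) pair unpacks from the 16-bit status halfword of a completion queue entry; the raw value is a hypothetical example, not taken from this run.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical raw status halfword (upper 16 bits of CQE Dword 3):
     * bit 0 = phase tag, bits 8:1 = SC, bits 11:9 = SCT, bits 13:12 = CRD,
     * bit 14 = more, bit 15 = dnr. SCT=0x0, SC=0x08 encodes as 0x0010. */
    uint16_t status = 0x0010;

    unsigned sc  = (status >> 1) & 0xff; /* status code      */
    unsigned sct = (status >> 9) & 0x7;  /* status code type */

    if (sct == 0x0 && sc == 0x08)
        printf("ABORTED - SQ DELETION (%02x/%02x)\n", sct, sc); /* prints (00/08) */
    else
        printf("sct=%#x sc=%#x\n", sct, sc);
    return 0;
}
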
00:26:25.555 11423.40 IOPS, 44.62 MiB/s [2024-11-20T08:59:56.471Z] 11714.50 IOPS, 45.76 MiB/s [2024-11-20T08:59:56.471Z] 11932.86 IOPS, 46.61 MiB/s [2024-11-20T08:59:56.471Z] 12067.50 IOPS, 47.14 MiB/s [2024-11-20T08:59:56.471Z]
00:26:25.555 [2024-11-20 09:59:49.658568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.555 [2024-11-20 09:59:49.658598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 71 further READ commands (lba:21344 through lba:21904, len:8 each) printed and completed identically: ABORTED - SQ DELETION (00/08) ...]
00:26:25.557 [2024-11-20 09:59:49.659476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.557 [2024-11-20 09:59:49.659482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 37 further WRITE commands (lba:21920 through lba:22208) printed and completed identically: ABORTED - SQ DELETION (00/08) ...]
00:26:25.558 [2024-11-20 09:59:49.659924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.558 [2024-11-20 09:59:49.659930] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.659937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.558 [2024-11-20 09:59:49.659941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.659948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.558 [2024-11-20 09:59:49.659953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.659959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.558 [2024-11-20 09:59:49.659965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.659972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.558 [2024-11-20 09:59:49.659977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.659984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.558 [2024-11-20 09:59:49.659989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.659995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.558 [2024-11-20 09:59:49.660000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.660006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.558 [2024-11-20 09:59:49.660011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.660018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.558 [2024-11-20 09:59:49.660023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.660029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.558 [2024-11-20 09:59:49.660034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.660041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.558 [2024-11-20 09:59:49.660047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.660053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.558 [2024-11-20 09:59:49.660058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.660064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.558 [2024-11-20 09:59:49.660070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.660076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.558 [2024-11-20 09:59:49.660081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.660088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.558 [2024-11-20 09:59:49.660093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.660099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.558 [2024-11-20 09:59:49.660104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.660110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.558 [2024-11-20 09:59:49.660116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.660138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.558 [2024-11-20 09:59:49.660143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.558 [2024-11-20 09:59:49.660148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22352 len:8 PRP1 0x0 PRP2 0x0 00:26:25.558 [2024-11-20 09:59:49.660154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.660192] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:25.558 [2024-11-20 09:59:49.660210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.558 [2024-11-20 09:59:49.660216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.660222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.558 [2024-11-20 09:59:49.660227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.660233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.558 [2024-11-20 09:59:49.660238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.660244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.558 [2024-11-20 09:59:49.660251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.558 [2024-11-20 09:59:49.660257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:25.558 [2024-11-20 09:59:49.662700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:25.558 [2024-11-20 09:59:49.662722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x743d70 (9): Bad file descriptor 00:26:25.558 [2024-11-20 09:59:49.689437] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:26:25.558 12115.11 IOPS, 47.32 MiB/s [2024-11-20T08:59:56.474Z] 12195.20 IOPS, 47.64 MiB/s [2024-11-20T08:59:56.474Z] 12263.73 IOPS, 47.91 MiB/s [2024-11-20T08:59:56.474Z] 12331.17 IOPS, 48.17 MiB/s [2024-11-20T08:59:56.474Z] 12370.31 IOPS, 48.32 MiB/s [2024-11-20T08:59:56.474Z] 12423.36 IOPS, 48.53 MiB/s [2024-11-20T08:59:56.474Z] 12474.00 IOPS, 48.73 MiB/s 00:26:25.558 Latency(us) 00:26:25.558 [2024-11-20T08:59:56.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.558 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:25.558 Verification LBA range: start 0x0 length 0x4000 00:26:25.558 NVMe0n1 : 15.01 12470.99 48.71 689.15 0.00 9704.31 542.72 23265.28 00:26:25.558 [2024-11-20T08:59:56.474Z] =================================================================================================================== 00:26:25.558 [2024-11-20T08:59:56.474Z] Total : 12470.99 48.71 689.15 0.00 9704.31 542.72 23265.28 00:26:25.558 Received shutdown signal, test time was about 15.000000 seconds 00:26:25.558 00:26:25.558 Latency(us) 00:26:25.558 [2024-11-20T08:59:56.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.558 [2024-11-20T08:59:56.474Z] =================================================================================================================== 00:26:25.558 [2024-11-20T08:59:56.474Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:25.558 09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:25.558 09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:25.558 09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:25.558 09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1490181 00:26:25.558 09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1490181 /var/tmp/bdevperf.sock 00:26:25.558 09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 
-o 4096 -w verify -t 1 -f 00:26:25.558 09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1490181 ']' 00:26:25.558 09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:25.558 09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.558 09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:25.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:25.558 09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.558 09:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:26.130 09:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:26.130 09:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:26.130 09:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:26.130 [2024-11-20 09:59:56.944702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:26.130 09:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:26.389 [2024-11-20 09:59:57.129147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:26.389 09:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:26.650 NVMe0n1 00:26:26.650 09:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:26.910 00:26:26.910 09:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:27.169 00:26:27.429 09:59:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:27.429 09:59:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:27.429 09:59:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:27.689 09:59:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:30.986 10:00:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:30.986 10:00:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:30.986 10:00:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:30.986 10:00:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1491497 00:26:30.986 10:00:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1491497 00:26:31.927 { 00:26:31.927 "results": [ 00:26:31.927 { 00:26:31.927 "job": "NVMe0n1", 00:26:31.927 "core_mask": "0x1", 00:26:31.927 "workload": "verify", 00:26:31.927 "status": "finished", 00:26:31.927 "verify_range": { 00:26:31.927 "start": 0, 00:26:31.927 "length": 16384 00:26:31.927 }, 00:26:31.927 "queue_depth": 128, 00:26:31.927 "io_size": 4096, 00:26:31.927 "runtime": 1.002787, 00:26:31.927 "iops": 12797.333830614078, 00:26:31.927 "mibps": 49.98958527583624, 00:26:31.927 "io_failed": 0, 00:26:31.927 "io_timeout": 0, 00:26:31.927 "avg_latency_us": 9966.580110652225, 00:26:31.927 "min_latency_us": 1747.6266666666668, 00:26:31.927 "max_latency_us": 9448.106666666667 00:26:31.927 } 00:26:31.927 ], 00:26:31.927 "core_count": 1 00:26:31.927 } 00:26:31.927 10:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:31.927 [2024-11-20 09:59:55.998105] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:26:31.927 [2024-11-20 09:59:55.998189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490181 ] 00:26:31.927 [2024-11-20 09:59:56.082703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.927 [2024-11-20 09:59:56.110458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.927 [2024-11-20 09:59:58.442809] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:31.927 [2024-11-20 09:59:58.442846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.928 [2024-11-20 09:59:58.442854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.928 [2024-11-20 09:59:58.442861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.928 [2024-11-20 09:59:58.442866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.928 [2024-11-20 09:59:58.442872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.928 [2024-11-20 09:59:58.442878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.928 [2024-11-20 09:59:58.442883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.928 [2024-11-20 09:59:58.442889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.928 [2024-11-20 09:59:58.442894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:26:31.928 [2024-11-20 09:59:58.442913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:26:31.928 [2024-11-20 09:59:58.442924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4fd70 (9): Bad file descriptor 00:26:31.928 [2024-11-20 09:59:58.455193] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:26:31.928 Running I/O for 1 seconds... 00:26:31.928 12705.00 IOPS, 49.63 MiB/s 00:26:31.928 Latency(us) 00:26:31.928 [2024-11-20T09:00:02.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.928 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:31.928 Verification LBA range: start 0x0 length 0x4000 00:26:31.928 NVMe0n1 : 1.00 12797.33 49.99 0.00 0.00 9966.58 1747.63 9448.11 00:26:31.928 [2024-11-20T09:00:02.844Z] =================================================================================================================== 00:26:31.928 [2024-11-20T09:00:02.844Z] Total : 12797.33 49.99 0.00 0.00 9966.58 1747.63 9448.11 00:26:31.928 10:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:31.928 10:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:32.189 10:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:32.449 10:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:32.449 10:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:32.449 10:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:32.708 10:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:36.002 10:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:36.002 10:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:36.002 10:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1490181 00:26:36.002 10:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1490181 ']' 00:26:36.002 10:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1490181 00:26:36.002 10:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:36.002 10:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:36.002 10:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1490181 00:26:36.002 10:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:36.002 10:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:36.002 10:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1490181' 00:26:36.002 killing process with pid 1490181 00:26:36.002 10:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1490181 00:26:36.002 10:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1490181 00:26:36.002 10:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:36.002 10:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:36.262 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:36.262 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:36.262 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:36.262 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:36.262 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:26:36.262 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:36.262 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:26:36.262 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:36.262 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:36.262 rmmod nvme_tcp 00:26:36.262 rmmod nvme_fabrics 00:26:36.262 rmmod nvme_keyring 00:26:36.262 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:36.262 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:26:36.262 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:26:36.262 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1486445 ']' 00:26:36.262 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1486445 00:26:36.262 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1486445 ']' 00:26:36.262 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1486445 00:26:36.262 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:36.262 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:36.262 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1486445 00:26:36.522 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:36.522 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:36.522 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1486445' 00:26:36.522 killing process with pid 1486445 00:26:36.522 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@973 -- # kill 1486445 00:26:36.522 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1486445 00:26:36.522 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:36.522 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:36.522 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:36.522 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:26:36.522 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:26:36.522 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:36.522 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:26:36.522 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:36.522 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:36.522 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.522 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:36.522 10:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:39.063 00:26:39.063 real 0m40.416s 00:26:39.063 user 2m4.401s 00:26:39.063 sys 0m8.681s 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:39.063 ************************************ 00:26:39.063 END TEST nvmf_failover 00:26:39.063 ************************************ 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.063 ************************************ 00:26:39.063 START TEST nvmf_host_discovery 00:26:39.063 ************************************ 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:39.063 * Looking for test storage... 
00:26:39.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:39.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.063 --rc genhtml_branch_coverage=1 00:26:39.063 --rc genhtml_function_coverage=1 00:26:39.063 --rc genhtml_legend=1 00:26:39.063 --rc geninfo_all_blocks=1 00:26:39.063 --rc geninfo_unexecuted_blocks=1 00:26:39.063 00:26:39.063 ' 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:39.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.063 --rc genhtml_branch_coverage=1 00:26:39.063 --rc genhtml_function_coverage=1 00:26:39.063 --rc genhtml_legend=1 00:26:39.063 --rc geninfo_all_blocks=1 00:26:39.063 --rc geninfo_unexecuted_blocks=1 00:26:39.063 00:26:39.063 ' 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:39.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.063 --rc genhtml_branch_coverage=1 00:26:39.063 --rc genhtml_function_coverage=1 00:26:39.063 --rc genhtml_legend=1 00:26:39.063 --rc geninfo_all_blocks=1 00:26:39.063 --rc geninfo_unexecuted_blocks=1 00:26:39.063 00:26:39.063 ' 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:39.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.063 --rc genhtml_branch_coverage=1 00:26:39.063 --rc genhtml_function_coverage=1 00:26:39.063 --rc genhtml_legend=1 00:26:39.063 --rc geninfo_all_blocks=1 00:26:39.063 --rc geninfo_unexecuted_blocks=1 00:26:39.063 00:26:39.063 ' 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:39.063 10:00:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:39.063 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:[... same toolchain path list as the PATH exported at paths/export.sh@2 above, trimmed ...]:/var/lib/snapd/snap/bin 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same list again with /opt/protoc/21.7/bin prepended, trimmed ...]:/var/lib/snapd/snap/bin 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo [... the exported PATH, identical to the paths/export.sh@4 value above, trimmed ...] 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:39.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:26:39.064 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:47.198 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:47.199 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:47.199 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:47.199 10:00:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:47.199 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:47.199 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:47.199 
10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:47.199 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:47.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:47.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:26:47.199 00:26:47.199 --- 10.0.0.2 ping statistics --- 00:26:47.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.199 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:47.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:47.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:26:47.199 00:26:47.199 --- 10.0.0.1 ping statistics --- 00:26:47.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.199 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1497102 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1497102 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1497102 ']' 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:47.199 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.199 [2024-11-20 10:00:17.328132] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
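At this point nvmf_tcp_init has built a two-sided test bed on a single machine: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24 (the target side), cvl_0_1 stayed in the root namespace with 10.0.0.1/24 (the initiator side), an iptables rule accepts TCP port 4420 on cvl_0_1, and the cross-namespace pings above confirm reachability in both directions. NVMF_TARGET_NS_CMD then prefixes every target process with ip netns exec. A few hand checks of the same wiring (interface names and addresses taken from this run):

    ip netns exec cvl_0_0_ns_spdk ip -4 addr show dev cvl_0_0   # expect 10.0.0.2/24
    ip -4 addr show dev cvl_0_1                                 # expect 10.0.0.1/24
    ping -c 1 10.0.0.2                                          # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target -> initiator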
00:26:47.199 [2024-11-20 10:00:17.328207] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.199 [2024-11-20 10:00:17.426087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.199 [2024-11-20 10:00:17.476934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:47.199 [2024-11-20 10:00:17.476983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:47.199 [2024-11-20 10:00:17.476992] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:47.199 [2024-11-20 10:00:17.477001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:47.199 [2024-11-20 10:00:17.477009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:47.199 [2024-11-20 10:00:17.477753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.460 [2024-11-20 10:00:18.213672] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.460 [2024-11-20 10:00:18.225929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.460 null0 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.460 null1 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1497345 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1497345 /tmp/host.sock 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1497345 ']' 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:47.460 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:47.460 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.460 [2024-11-20 10:00:18.326208] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
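Two separate SPDK apps are now running: the target (nvmfpid 1497102, nvmf_tgt -m 0x2 inside the namespace, RPC on the default /var/tmp/spdk.sock) and the host-side app under test (hostpid 1497345, nvmf_tgt -m 0x1 -r /tmp/host.sock). In the trace, rpc_cmd with no -s flag talks to the target and rpc_cmd -s /tmp/host.sock talks to the host app. A sketch of querying each by hand, assuming the stock scripts/rpc.py client from the SPDK tree (nvmf_get_subsystems itself does not appear in this trace):

    scripts/rpc.py nvmf_get_subsystems                          # target side, default socket
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers  # host side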
00:26:47.460 [2024-11-20 10:00:18.326277] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497345 ] 00:26:47.719 [2024-11-20 10:00:18.420632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.719 [2024-11-20 10:00:18.473348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:48.290 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.613 [2024-11-20 10:00:19.489210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:48.613 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.892 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:48.892 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:48.892 10:00:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.892 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:48.892 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.892 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.892 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:48.892 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:48.892 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.892 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:48.892 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:48.892 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:48.892 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:48.892 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:48.892 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:48.892 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:48.892 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:48.892 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:26:48.893 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:49.502 [2024-11-20 10:00:20.213340] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:49.502 [2024-11-20 10:00:20.213362] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:49.502 
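The INFO lines above are the discovery service at work: the discovery log page only lists subsystems the querying host NQN may access, so nothing attaches until nvmf_subsystem_add_host grants nqn.2021-12.io.spdk:test access; after that, the discovery controller created by bdev_nvme_start_discovery re-reads the log page, finds NVM subsystem nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420, and auto-attaches it as controller nvme0 with bdev nvme0n1. The conditions the surrounding waitforcondition loops poll for can be reproduced by hand with the same RPCs and jq filters seen in the trace:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # -> nvme0n1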
[2024-11-20 10:00:20.213376] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:49.502 [2024-11-20 10:00:20.300651] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:49.762 [2024-11-20 10:00:20.482689] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:49.762 [2024-11-20 10:00:20.483794] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2002780:1 started. 00:26:49.762 [2024-11-20 10:00:20.485445] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:49.762 [2024-11-20 10:00:20.485464] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:49.762 [2024-11-20 10:00:20.491561] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2002780 was disconnected and freed. delete nvme_qpair. 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:50.023 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:50.023 [2024-11-20 10:00:20.916641] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2002b20:1 started. 00:26:50.024 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:50.024 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.024 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:50.024 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.024 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:50.024 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.024 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:50.024 [2024-11-20 10:00:20.922075] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2002b20 was disconnected and freed. delete nvme_qpair. 
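Notification counting is how the test observes bdev hot-add: attaching nvme0n1 produced notification id 1, and adding null1 as a second namespace (surfacing nvme0n2, checked just above) raises notify_id to 2, which the next is_notification_count_eq step verifies by reading only the notifications newer than the last seen id. The underlying query, runnable by hand:

    scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'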
00:26:50.284 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.284 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:50.284 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:50.284 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:50.284 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:50.284 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:50.284 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:50.284 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:50.284 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:50.284 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:50.284 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:50.284 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:50.284 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:50.284 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.284 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.284 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.284 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:50.284 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:50.284 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:50.284 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:50.284 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:50.284 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.284 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.284 [2024-11-20 10:00:21.021372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:50.284 [2024-11-20 10:00:21.021997] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:50.284 [2024-11-20 10:00:21.022018] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:50.284 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.284 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:26:50.284 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:50.284 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:50.284 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:50.284 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:50.284 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:50.284 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:50.285 [2024-11-20 10:00:21.109291] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:50.285 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:50.545 [2024-11-20 10:00:21.210144] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:26:50.545 [2024-11-20 10:00:21.210187] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:50.545 [2024-11-20 10:00:21.210196] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:50.545 [2024-11-20 10:00:21.210201] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.488 [2024-11-20 10:00:22.273171] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:51.488 [2024-11-20 10:00:22.273192] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:51.488 [2024-11-20 10:00:22.277071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.488 [2024-11-20 10:00:22.277091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.488 [2024-11-20 10:00:22.277101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.488 [2024-11-20 10:00:22.277114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.488 [2024-11-20 10:00:22.277122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.488 [2024-11-20 10:00:22.277130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.488 [2024-11-20 10:00:22.277138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.488 [2024-11-20 10:00:22.277145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.488 [2024-11-20 10:00:22.277152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2e10 is same with the state(6) to be set 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.488 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:51.488 [2024-11-20 10:00:22.287084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd2e10 (9): Bad file descriptor 00:26:51.488 [2024-11-20 10:00:22.297119] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:51.488 [2024-11-20 10:00:22.297133] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:51.488 [2024-11-20 10:00:22.297139] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:51.489 [2024-11-20 10:00:22.297144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:51.489 [2024-11-20 10:00:22.297171] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:51.489 [2024-11-20 10:00:22.297409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.489 [2024-11-20 10:00:22.297426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd2e10 with addr=10.0.0.2, port=4420 00:26:51.489 [2024-11-20 10:00:22.297435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2e10 is same with the state(6) to be set 00:26:51.489 [2024-11-20 10:00:22.297449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd2e10 (9): Bad file descriptor 00:26:51.489 [2024-11-20 10:00:22.297467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:51.489 [2024-11-20 10:00:22.297474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:51.489 [2024-11-20 10:00:22.297486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:51.489 [2024-11-20 10:00:22.297493] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:51.489 [2024-11-20 10:00:22.297499] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:26:51.489 [2024-11-20 10:00:22.297504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:51.489 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.489 [2024-11-20 10:00:22.307201] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:51.489 [2024-11-20 10:00:22.307213] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:51.489 [2024-11-20 10:00:22.307218] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:51.489 [2024-11-20 10:00:22.307222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:51.489 [2024-11-20 10:00:22.307237] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:51.489 [2024-11-20 10:00:22.307582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.489 [2024-11-20 10:00:22.307595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd2e10 with addr=10.0.0.2, port=4420 00:26:51.489 [2024-11-20 10:00:22.307603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2e10 is same with the state(6) to be set 00:26:51.489 [2024-11-20 10:00:22.307615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd2e10 (9): Bad file descriptor 00:26:51.489 [2024-11-20 10:00:22.307632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:51.489 [2024-11-20 10:00:22.307639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:51.489 [2024-11-20 10:00:22.307646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:51.489 [2024-11-20 10:00:22.307652] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:51.489 [2024-11-20 10:00:22.307657] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:51.489 [2024-11-20 10:00:22.307661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:51.489 [2024-11-20 10:00:22.317269] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:51.489 [2024-11-20 10:00:22.317282] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:51.489 [2024-11-20 10:00:22.317287] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:51.489 [2024-11-20 10:00:22.317292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:51.489 [2024-11-20 10:00:22.317307] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:51.489 [2024-11-20 10:00:22.317643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.489 [2024-11-20 10:00:22.317657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd2e10 with addr=10.0.0.2, port=4420 00:26:51.489 [2024-11-20 10:00:22.317665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2e10 is same with the state(6) to be set 00:26:51.489 [2024-11-20 10:00:22.317677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd2e10 (9): Bad file descriptor 00:26:51.489 [2024-11-20 10:00:22.317710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:51.489 [2024-11-20 10:00:22.317718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:51.489 [2024-11-20 10:00:22.317725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:51.489 [2024-11-20 10:00:22.317732] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:51.489 [2024-11-20 10:00:22.317737] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:51.489 [2024-11-20 10:00:22.317741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:51.489 [2024-11-20 10:00:22.327338] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:51.489 [2024-11-20 10:00:22.327351] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:51.489 [2024-11-20 10:00:22.327355] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:51.489 [2024-11-20 10:00:22.327360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:51.489 [2024-11-20 10:00:22.327376] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:51.489 [2024-11-20 10:00:22.327717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.489 [2024-11-20 10:00:22.327729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd2e10 with addr=10.0.0.2, port=4420 00:26:51.489 [2024-11-20 10:00:22.327737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2e10 is same with the state(6) to be set 00:26:51.489 [2024-11-20 10:00:22.327748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd2e10 (9): Bad file descriptor 00:26:51.489 [2024-11-20 10:00:22.327764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:51.489 [2024-11-20 10:00:22.327771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:51.489 [2024-11-20 10:00:22.327778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:51.489 [2024-11-20 10:00:22.327785] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:51.489 [2024-11-20 10:00:22.327789] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:51.489 [2024-11-20 10:00:22.327794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:51.489 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.489 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:51.489 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:51.489 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:51.489 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:51.489 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:51.489 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:51.489 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:51.489 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.489 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:51.489 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.489 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:51.489 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.489 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:51.489 [2024-11-20 10:00:22.337408] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:51.489 [2024-11-20 10:00:22.337423] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:51.489 [2024-11-20 10:00:22.337428] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:51.489 [2024-11-20 10:00:22.337432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:51.489 [2024-11-20 10:00:22.337447] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:51.489 [2024-11-20 10:00:22.337782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.489 [2024-11-20 10:00:22.337795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd2e10 with addr=10.0.0.2, port=4420 00:26:51.489 [2024-11-20 10:00:22.337802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2e10 is same with the state(6) to be set 00:26:51.489 [2024-11-20 10:00:22.337813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd2e10 (9): Bad file descriptor 00:26:51.489 [2024-11-20 10:00:22.337831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:51.489 [2024-11-20 10:00:22.337838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:51.489 [2024-11-20 10:00:22.337845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:51.489 [2024-11-20 10:00:22.337851] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:51.489 [2024-11-20 10:00:22.337856] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:51.489 [2024-11-20 10:00:22.337861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:51.489 [2024-11-20 10:00:22.347479] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:51.489 [2024-11-20 10:00:22.347493] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:51.489 [2024-11-20 10:00:22.347498] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:51.490 [2024-11-20 10:00:22.347503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:51.490 [2024-11-20 10:00:22.347518] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:51.490 [2024-11-20 10:00:22.347810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.490 [2024-11-20 10:00:22.347823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd2e10 with addr=10.0.0.2, port=4420 00:26:51.490 [2024-11-20 10:00:22.347830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2e10 is same with the state(6) to be set 00:26:51.490 [2024-11-20 10:00:22.347842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd2e10 (9): Bad file descriptor 00:26:51.490 [2024-11-20 10:00:22.347852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:51.490 [2024-11-20 10:00:22.347859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:51.490 [2024-11-20 10:00:22.347870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:51.490 [2024-11-20 10:00:22.347876] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:51.490 [2024-11-20 10:00:22.347881] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:51.490 [2024-11-20 10:00:22.347886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:51.490 [2024-11-20 10:00:22.357549] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:51.490 [2024-11-20 10:00:22.357560] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:51.490 [2024-11-20 10:00:22.357565] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:51.490 [2024-11-20 10:00:22.357569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:51.490 [2024-11-20 10:00:22.357584] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:51.490 [2024-11-20 10:00:22.357793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.490 [2024-11-20 10:00:22.357805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd2e10 with addr=10.0.0.2, port=4420 00:26:51.490 [2024-11-20 10:00:22.357813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2e10 is same with the state(6) to be set 00:26:51.490 [2024-11-20 10:00:22.357823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd2e10 (9): Bad file descriptor 00:26:51.490 [2024-11-20 10:00:22.357834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:51.490 [2024-11-20 10:00:22.357841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:51.490 [2024-11-20 10:00:22.357848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:51.490 [2024-11-20 10:00:22.357855] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:51.490 [2024-11-20 10:00:22.357860] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:51.490 [2024-11-20 10:00:22.357864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:51.490 [2024-11-20 10:00:22.367616] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:51.490 [2024-11-20 10:00:22.367629] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:51.490 [2024-11-20 10:00:22.367634] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:51.490 [2024-11-20 10:00:22.367638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:51.490 [2024-11-20 10:00:22.367653] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:51.490 [2024-11-20 10:00:22.367948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.490 [2024-11-20 10:00:22.367960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd2e10 with addr=10.0.0.2, port=4420 00:26:51.490 [2024-11-20 10:00:22.367968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2e10 is same with the state(6) to be set 00:26:51.490 [2024-11-20 10:00:22.367979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd2e10 (9): Bad file descriptor 00:26:51.490 [2024-11-20 10:00:22.367996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:51.490 [2024-11-20 10:00:22.368009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:51.490 [2024-11-20 10:00:22.368017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:51.490 [2024-11-20 10:00:22.368023] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:51.490 [2024-11-20 10:00:22.368028] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:51.490 [2024-11-20 10:00:22.368032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:51.490 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.490 [2024-11-20 10:00:22.377685] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:51.490 [2024-11-20 10:00:22.377697] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:51.490 [2024-11-20 10:00:22.377702] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:51.490 [2024-11-20 10:00:22.377707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:51.490 [2024-11-20 10:00:22.377721] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:51.490 [2024-11-20 10:00:22.378015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.490 [2024-11-20 10:00:22.378027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd2e10 with addr=10.0.0.2, port=4420 00:26:51.490 [2024-11-20 10:00:22.378035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2e10 is same with the state(6) to be set 00:26:51.490 [2024-11-20 10:00:22.378047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd2e10 (9): Bad file descriptor 00:26:51.490 [2024-11-20 10:00:22.378066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:51.490 [2024-11-20 10:00:22.378073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:51.490 [2024-11-20 10:00:22.378080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:51.490 [2024-11-20 10:00:22.378087] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:51.490 [2024-11-20 10:00:22.378091] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:51.490 [2024-11-20 10:00:22.378096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:51.490 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:51.490 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:51.490 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:51.490 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:51.490 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:51.490 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:51.490 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:51.490 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:51.490 [2024-11-20 10:00:22.387753] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:51.490 [2024-11-20 10:00:22.387768] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:51.490 [2024-11-20 10:00:22.387773] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:51.490 [2024-11-20 10:00:22.387777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:51.490 [2024-11-20 10:00:22.387792] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:51.490 [2024-11-20 10:00:22.387958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.490 [2024-11-20 10:00:22.387972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd2e10 with addr=10.0.0.2, port=4420 00:26:51.490 [2024-11-20 10:00:22.387980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2e10 is same with the state(6) to be set 00:26:51.490 [2024-11-20 10:00:22.387991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd2e10 (9): Bad file descriptor 00:26:51.490 [2024-11-20 10:00:22.388002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:51.490 [2024-11-20 10:00:22.388008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:51.490 [2024-11-20 10:00:22.388015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:51.490 [2024-11-20 10:00:22.388022] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:51.490 [2024-11-20 10:00:22.388027] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:51.490 [2024-11-20 10:00:22.388031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:51.490 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:51.490 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:51.490 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:51.490 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.490 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:51.490 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.490 [2024-11-20 10:00:22.397823] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:51.490 [2024-11-20 10:00:22.397836] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:51.490 [2024-11-20 10:00:22.397841] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:51.491 [2024-11-20 10:00:22.397845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:51.491 [2024-11-20 10:00:22.397861] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:51.491 [2024-11-20 10:00:22.398120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.491 [2024-11-20 10:00:22.398133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd2e10 with addr=10.0.0.2, port=4420 00:26:51.491 [2024-11-20 10:00:22.398140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2e10 is same with the state(6) to be set 00:26:51.491 [2024-11-20 10:00:22.398152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd2e10 (9): Bad file descriptor 00:26:51.491 [2024-11-20 10:00:22.398173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:51.491 [2024-11-20 10:00:22.398181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:51.491 [2024-11-20 10:00:22.398192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:51.491 [2024-11-20 10:00:22.398198] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:51.491 [2024-11-20 10:00:22.398203] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:51.491 [2024-11-20 10:00:22.398208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:51.751 [2024-11-20 10:00:22.400180] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:51.751 [2024-11-20 10:00:22.400200] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:51.751 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.751 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:26:51.751 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:52.694 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:52.695 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:52.695 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.695 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:52.695 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.695 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 
-- # (( max-- )) 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.957 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.899 [2024-11-20 10:00:24.774301] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:53.899 [2024-11-20 10:00:24.774316] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:53.899 [2024-11-20 10:00:24.774325] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:54.159 [2024-11-20 10:00:24.862584] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:54.159 [2024-11-20 10:00:24.968363] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:54.159 [2024-11-20 10:00:24.969034] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1fd0700:1 started. 00:26:54.159 [2024-11-20 10:00:24.970420] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:54.159 [2024-11-20 10:00:24.970442] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:54.159 [2024-11-20 10:00:24.972946] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1fd0700 was disconnected and freed. delete nvme_qpair. 
00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.159 request: 00:26:54.159 { 00:26:54.159 "name": "nvme", 00:26:54.159 "trtype": "tcp", 00:26:54.159 "traddr": "10.0.0.2", 00:26:54.159 "adrfam": "ipv4", 00:26:54.159 "trsvcid": "8009", 00:26:54.159 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:54.159 "wait_for_attach": true, 00:26:54.159 "method": "bdev_nvme_start_discovery", 00:26:54.159 "req_id": 1 00:26:54.159 } 00:26:54.159 Got JSON-RPC error response 00:26:54.159 response: 00:26:54.159 { 00:26:54.159 "code": -17, 00:26:54.159 "message": "File exists" 00:26:54.159 } 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.159 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:54.159 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.159 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:54.159 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:54.159 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.159 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:54.159 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.159 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:26:54.159 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.159 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.419 request: 00:26:54.419 { 00:26:54.419 "name": "nvme_second", 00:26:54.419 "trtype": "tcp", 00:26:54.419 "traddr": "10.0.0.2", 00:26:54.419 "adrfam": "ipv4", 00:26:54.419 "trsvcid": "8009", 00:26:54.419 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:54.419 "wait_for_attach": true, 00:26:54.419 "method": "bdev_nvme_start_discovery", 00:26:54.419 "req_id": 1 00:26:54.419 } 00:26:54.419 Got JSON-RPC error response 00:26:54.419 response: 00:26:54.419 { 00:26:54.419 "code": -17, 00:26:54.419 "message": "File exists" 00:26:54.419 } 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:54.419 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.420 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.367 [2024-11-20 10:00:26.229752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.368 [2024-11-20 10:00:26.229777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200e910 with addr=10.0.0.2, port=8010 00:26:55.368 [2024-11-20 10:00:26.229787] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:55.368 [2024-11-20 
10:00:26.229792] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:55.368 [2024-11-20 10:00:26.229797] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:56.753 [2024-11-20 10:00:27.232076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.753 [2024-11-20 10:00:27.232097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fcfe70 with addr=10.0.0.2, port=8010 00:26:56.753 [2024-11-20 10:00:27.232106] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:56.753 [2024-11-20 10:00:27.232111] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:56.753 [2024-11-20 10:00:27.232116] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:57.324 [2024-11-20 10:00:28.234215] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:57.324 request: 00:26:57.324 { 00:26:57.324 "name": "nvme_second", 00:26:57.324 "trtype": "tcp", 00:26:57.324 "traddr": "10.0.0.2", 00:26:57.324 "adrfam": "ipv4", 00:26:57.324 "trsvcid": "8010", 00:26:57.324 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:57.324 "wait_for_attach": false, 00:26:57.324 "attach_timeout_ms": 3000, 00:26:57.324 "method": "bdev_nvme_start_discovery", 00:26:57.324 "req_id": 1 00:26:57.324 } 00:26:57.324 Got JSON-RPC error response 00:26:57.324 response: 00:26:57.584 { 00:26:57.584 "code": -110, 00:26:57.584 "message": "Connection timed out" 00:26:57.584 } 00:26:57.584 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:57.584 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:57.584 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:57.584 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:57.584 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:57.584 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:57.584 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:57.584 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:57.584 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.584 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:57.584 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.584 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:57.584 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.584 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:57.584 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:57.584 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1497345 00:26:57.584 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:57.584 10:00:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:57.584 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:57.584 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:57.585 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:57.585 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:57.585 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:57.585 rmmod nvme_tcp 00:26:57.585 rmmod nvme_fabrics 00:26:57.585 rmmod nvme_keyring 00:26:57.585 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:57.585 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:57.585 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:57.585 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1497102 ']' 00:26:57.585 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1497102 00:26:57.585 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1497102 ']' 00:26:57.585 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1497102 00:26:57.585 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:57.585 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:57.585 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1497102 00:26:57.585 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:57.585 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:57.585 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1497102' 00:26:57.585 killing process with pid 1497102 00:26:57.585 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1497102 00:26:57.585 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1497102 00:26:57.846 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:57.846 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:57.846 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:57.846 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:57.846 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:57.846 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:57.846 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:57.846 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:57.846 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:57.846 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:26:57.846 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.846 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.761 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:59.761 00:26:59.761 real 0m21.148s 00:26:59.761 user 0m25.167s 00:26:59.761 sys 0m7.286s 00:26:59.761 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:59.761 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.761 ************************************ 00:26:59.761 END TEST nvmf_host_discovery 00:26:59.761 ************************************ 00:26:59.761 10:00:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:59.761 10:00:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:59.761 10:00:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:59.761 10:00:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.024 ************************************ 00:27:00.024 START TEST nvmf_host_multipath_status 00:27:00.024 ************************************ 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:00.024 * Looking for test storage... 00:27:00.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:00.024 10:00:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:00.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.024 --rc genhtml_branch_coverage=1 00:27:00.024 --rc genhtml_function_coverage=1 00:27:00.024 --rc genhtml_legend=1 00:27:00.024 --rc geninfo_all_blocks=1 00:27:00.024 --rc geninfo_unexecuted_blocks=1 00:27:00.024 00:27:00.024 ' 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:00.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.024 --rc genhtml_branch_coverage=1 00:27:00.024 --rc genhtml_function_coverage=1 00:27:00.024 --rc genhtml_legend=1 00:27:00.024 --rc geninfo_all_blocks=1 00:27:00.024 --rc geninfo_unexecuted_blocks=1 00:27:00.024 00:27:00.024 ' 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:00.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.024 --rc genhtml_branch_coverage=1 00:27:00.024 --rc genhtml_function_coverage=1 00:27:00.024 --rc genhtml_legend=1 00:27:00.024 --rc geninfo_all_blocks=1 00:27:00.024 --rc geninfo_unexecuted_blocks=1 00:27:00.024 00:27:00.024 ' 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:00.024 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:27:00.024 --rc genhtml_branch_coverage=1 00:27:00.024 --rc genhtml_function_coverage=1 00:27:00.024 --rc genhtml_legend=1 00:27:00.024 --rc geninfo_all_blocks=1 00:27:00.024 --rc geninfo_unexecuted_blocks=1 00:27:00.024 00:27:00.024 ' 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:00.024 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.025 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:27:00.025 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:00.025 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:00.025 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:00.025 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:00.025 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:00.025 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:27:00.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:00.025 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:00.025 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:00.025 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:00.286 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:00.286 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:00.286 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:00.286 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:27:00.286 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:00.286 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:00.286 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:00.286 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:00.286 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:00.286 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:00.286 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:00.286 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:00.286 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.286 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.286 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.286 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:00.286 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:00.286 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:27:00.286 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:08.433 10:00:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:08.433 
10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:08.433 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:08.433 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:08.433 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:08.433 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:08.433 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:08.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:08.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:27:08.434 00:27:08.434 --- 10.0.0.2 ping statistics --- 00:27:08.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.434 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:08.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:08.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:27:08.434 00:27:08.434 --- 10.0.0.1 ping statistics --- 00:27:08.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.434 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1503549 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1503549 
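The interface setup above is nvmf_tcp_init building a two-namespace topology from the pair of E810 ports found earlier: the target port cvl_0_0 (10.0.0.2) moves into namespace cvl_0_0_ns_spdk while the initiator port cvl_0_1 (10.0.0.1) stays in the root namespace, so NVMe/TCP traffic crosses the physical link even though both ends live on one machine, and the two pings prove reachability in each direction before the target starts. Condensed to its effective commands (a sketch; the address flushes and the iptables comment argument shown in the log are omitted here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                           # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target -> initiator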
00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1503549 ']' 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:08.434 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:08.434 [2024-11-20 10:00:38.475152] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:27:08.434 [2024-11-20 10:00:38.475230] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:08.434 [2024-11-20 10:00:38.576850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:08.434 [2024-11-20 10:00:38.629415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:08.434 [2024-11-20 10:00:38.629468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:08.434 [2024-11-20 10:00:38.629476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:08.434 [2024-11-20 10:00:38.629484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:08.434 [2024-11-20 10:00:38.629491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
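The target command line recorded above maps one-to-one onto these startup notices (a sketch of the same invocation, annotated; the binary path is the one used throughout this workspace):

    # -i 0       shared-memory id 0, matching --file-prefix=spdk0 in the EAL parameters
    # -e 0xFFFF  enable all tracepoint groups ("Tracepoint Group Mask 0xFFFF specified")
    # -m 0x3     reactor core mask, cores 0 and 1, hence the two reactor_run notices below
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3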
00:27:08.434 [2024-11-20 10:00:38.631129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.434 [2024-11-20 10:00:38.631134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.434 10:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:08.434 10:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:08.434 10:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:08.434 10:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:08.434 10:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:08.434 10:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:08.434 10:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1503549 00:27:08.434 10:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:08.696 [2024-11-20 10:00:39.498682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.696 10:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:08.958 Malloc0 00:27:08.958 10:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:09.219 10:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:09.479 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:09.479 [2024-11-20 10:00:40.306504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.479 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:09.740 [2024-11-20 10:00:40.490988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:09.740 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:09.740 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1504012 00:27:09.740 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:09.740 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1504012 
/var/tmp/bdevperf.sock 00:27:09.740 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1504012 ']' 00:27:09.740 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:09.740 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:09.740 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:09.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:09.740 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:09.740 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:10.682 10:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.682 10:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:10.682 10:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:10.942 10:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:11.203 Nvme0n1 00:27:11.203 10:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:11.773 Nvme0n1 00:27:11.773 10:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:11.773 10:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:13.685 10:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:13.685 10:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:13.946 10:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:13.946 10:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:15.331 10:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:15.331 10:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:15.331 10:00:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.331 10:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:15.331 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.331 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:15.331 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.331 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:15.331 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:15.331 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:15.331 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.331 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:15.592 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.592 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:15.592 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.592 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:15.854 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.854 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:15.854 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.854 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:15.854 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.854 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:15.854 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
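At this point bdevperf has attached the same subsystem twice, over ports 4420 and 4421 with -x multipath, so the single Nvme0n1 bdev carries two I/O paths. Every port_status probe in this run follows the pattern visible above: a bdev_nvme_get_io_paths RPC against the bdevperf socket feeds a jq filter that pulls one boolean out of the reported paths, and check_status asserts six such values in order (current, connected, accessible for the 4420 and 4421 paths) after each set_ANA_state call has flipped the listeners' ANA states via nvmf_subsystem_listener_set_ana_state. One probe spelled out as a sketch (the output comment is illustrative, not captured from this run):

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
    # Prints "true" while the 4420 path is the active one; the test compares the
    # string against the expected literal, e.g. [[ true == \t\r\u\e ]].
00:27:15.854 10:00:46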
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:16.115 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.115 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:16.115 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:16.375 10:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:16.637 10:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:17.581 10:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:17.581 10:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:17.581 10:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.581 10:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:17.842 10:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:17.842 10:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:17.842 10:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.842 10:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:17.842 10:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:17.842 10:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:17.842 10:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.842 10:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:18.103 10:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.103 10:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:18.103 10:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.103 10:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:18.363 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.363 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:18.363 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.363 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:18.363 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.363 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:18.363 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.363 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:18.624 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.624 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:18.624 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:18.885 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:18.885 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:20.273 10:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:20.273 10:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:20.273 10:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.273 10:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:20.273 10:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.273 10:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:20.273 10:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.273 10:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:20.273 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:20.273 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:20.273 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.273 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:20.534 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.534 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:20.534 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.534 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:20.794 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.794 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:20.794 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.794 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:20.794 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.794 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:20.794 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.794 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:21.055 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.055 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:21.055 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:27:21.316 10:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:21.316 10:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:22.701 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:22.702 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:22.702 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.702 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:22.702 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:22.702 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:22.702 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.702 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:22.702 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:22.702 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:22.702 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.702 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:22.962 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:22.962 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:22.962 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:22.962 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.222 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.222 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:23.222 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:27:23.222 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:23.222 10:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.222 10:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:23.222 10:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.222 10:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:23.482 10:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:23.482 10:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:23.482 10:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:23.742 10:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:23.742 10:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:25.124 10:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:25.124 10:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:25.124 10:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.124 10:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:25.124 10:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:25.124 10:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:25.124 10:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.124 10:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:25.124 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:25.124 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:25.124 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.124 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:25.384 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.384 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:25.384 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.384 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:25.646 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.646 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:25.646 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.646 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:25.646 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:25.646 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:25.646 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.646 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:25.907 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:25.907 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:25.907 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:26.168 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:26.168 10:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:27.551 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:27.551 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:27.551 10:00:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:27.551 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.551 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:27.551 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:27.551 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.551 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:27.551 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.551 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:27.551 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.551 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:27.812 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.812 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:27.812 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.812 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:28.073 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.073 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:28.073 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.073 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:28.073 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:28.073 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:28.073 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.073 
10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:28.334 10:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.334 10:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:28.595 10:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:28.595 10:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:28.595 10:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:28.855 10:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:29.803 10:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:29.803 10:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:29.803 10:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.803 10:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:30.064 10:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.064 10:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:30.064 10:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.064 10:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:30.324 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.324 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:30.324 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.324 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:30.584 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.584 10:01:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:30.584 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.584 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:30.584 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.585 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:30.585 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.585 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:30.844 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.844 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:30.844 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.844 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:31.104 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.104 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:31.104 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:31.104 10:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:31.364 10:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:32.306 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:32.306 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:32.306 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.306 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:32.566 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:32.566 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:32.566 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.566 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:32.831 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.831 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:32.831 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.831 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:32.831 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.831 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:32.831 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.831 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:33.093 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.093 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:33.093 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.093 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:33.354 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.354 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:33.354 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:33.354 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.354 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.354 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:33.354 
10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:33.615 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:33.875 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:34.821 10:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:34.821 10:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:34.821 10:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.821 10:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:35.081 10:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:35.081 10:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:35.081 10:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.081 10:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:35.342 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:35.342 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:35.342 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.342 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:35.342 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:35.342 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:35.342 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.342 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:35.602 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:35.602 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:35.602 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.602 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:35.863 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:35.863 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:35.863 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.863 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:35.863 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:35.863 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:35.863 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:36.124 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:36.385 10:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:37.325 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:37.325 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:37.325 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.325 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:37.585 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.585 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:37.585 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.585 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:37.585 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:27:37.585 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:37.585 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.585 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:37.845 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.845 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:37.845 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.845 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:38.105 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.105 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:38.105 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.105 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:38.105 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.105 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:38.105 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.105 10:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:38.364 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:38.364 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1504012 00:27:38.364 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1504012 ']' 00:27:38.364 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1504012 00:27:38.364 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:38.364 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:38.365 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1504012 00:27:38.365 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # 
process_name=reactor_2
00:27:38.365 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:27:38.365 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1504012'
killing process with pid 1504012
10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1504012
10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1504012
00:27:38.365 {
00:27:38.365 "results": [
00:27:38.365 {
00:27:38.365 "job": "Nvme0n1",
00:27:38.365 "core_mask": "0x4",
00:27:38.365 "workload": "verify",
00:27:38.365 "status": "terminated",
00:27:38.365 "verify_range": {
00:27:38.365 "start": 0,
00:27:38.365 "length": 16384
00:27:38.365 },
00:27:38.365 "queue_depth": 128,
00:27:38.365 "io_size": 4096,
00:27:38.365 "runtime": 26.577071,
00:27:38.365 "iops": 12086.84734295965,
00:27:38.365 "mibps": 47.214247433436135,
00:27:38.365 "io_failed": 0,
00:27:38.365 "io_timeout": 0,
00:27:38.365 "avg_latency_us": 10571.319969409535,
00:27:38.365 "min_latency_us": 901.12,
00:27:38.365 "max_latency_us": 3019898.88
00:27:38.365 }
00:27:38.365 ],
00:27:38.365 "core_count": 1
00:27:38.365 }
00:27:38.628 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1504012
00:27:38.628 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:38.628 [2024-11-20 10:00:40.572534] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:27:38.628 [2024-11-20 10:00:40.572616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1504012 ]
00:27:38.628 [2024-11-20 10:00:40.666852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:38.628 [2024-11-20 10:00:40.718532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:38.628 Running I/O for 90 seconds...
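The trace above is the core of nvmf_host_multipath_status: it walks the two listeners (ports 4420 and 4421 on 10.0.0.2) through ANA state pairs -- non_optimized/inaccessible (@104), inaccessible/inaccessible (@108), inaccessible/optimized (@112), optimized/optimized (@119, after switching the bdev to the active_active multipath policy at @116), non_optimized/optimized (@123), non_optimized/non_optimized (@129), non_optimized/inaccessible (@133) -- sleeping 1 second after each change and then asserting the host-side view of every path. The helpers can be read straight off the xtrace lines; the sketch below is a minimal reconstruction of them (function bodies and local variable names are inferred, with $rootdir standing for the absolute SPDK path shown in the trace; the real multipath_status.sh may differ):

    # Target-side RPC (no -s flag): flip the ANA state each listener advertises
    # (trace lines @59/@60).
    set_ANA_state() {
        "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # Host-side RPC (-s selects the bdevperf app's socket): read one attribute
    # of the I/O path behind listener port $1 and compare it with $3 (@64).
    port_status() {
        local port=$1 attr=$2 expected=$3 actual
        actual=$("$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }

    # One full assertion pass, in the order traced at @68-@73: current,
    # connected and accessible for port 4420, then for port 4421.
    check_status() {
        port_status 4420 current "$1"
        port_status 4421 current "$2"
        port_status 4420 connected "$3"
        port_status 4421 connected "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

Under the suite's usual errexit settings, the first port_status mismatch aborts the run, which is why every [[ x == y ]] comparison in the log above had to come out equal. Note also how the policy switch shows up in the data: before @116 only one path reports current=true at a time, while after active_active both 4420 and 4421 report current=true simultaneously.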
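killprocess (common/autotest_common.sh@954-@978 above) validates that a pid was passed, probes it with kill -0, resolves the process name via ps, kills it and waits; the dying bdevperf then prints the per-job JSON summary. The numbers are self-consistent: 12086.85 IOPS of 4096-byte verify I/O is 12086.85 * 4096 / 2^20 = 47.21 MiB/s, matching "mibps", and "io_failed": 0 shows the ANA flips never surfaced as I/O errors to the job. A hypothetical post-processing one-liner over such a summary saved to a file (summary.json is an assumed name; field names are the ones printed above):

    # Pull the headline numbers out of a saved bdevperf JSON summary.
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, io_failed=\(.io_failed)"' summary.json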
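What follows, dumped by the `cat try.txt` step above, is the bdevperf-side log of the whole run: per-interval throughput samples first, then nvme_qpair traces for commands that completed while a listener was inaccessible. In each completion, "(03/02)" is the NVMe status code type/status code pair -- type 0x3 (path-related status) with code 0x02, Asymmetric Access Inaccessible -- which is exactly what the host should see on a path whose ANA state was just set to inaccessible; the multipath layer retries those commands on the surviving path, consistent with "io_failed": 0 in the summary. A hypothetical triage one-liner against the original try.txt (one completion per line in that file):

    # Count completions that failed with ANA-inaccessible status.
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt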
00:27:38.628 10987.00 IOPS, 42.92 MiB/s [2024-11-20T09:01:09.544Z] 11404.50 IOPS, 44.55 MiB/s [2024-11-20T09:01:09.544Z] 11386.33 IOPS, 44.48 MiB/s [2024-11-20T09:01:09.544Z] 11773.00 IOPS, 45.99 MiB/s [2024-11-20T09:01:09.544Z] 12032.60 IOPS, 47.00 MiB/s [2024-11-20T09:01:09.544Z] 12165.67 IOPS, 47.52 MiB/s [2024-11-20T09:01:09.544Z] 12295.71 IOPS, 48.03 MiB/s [2024-11-20T09:01:09.544Z] 12385.00 IOPS, 48.38 MiB/s [2024-11-20T09:01:09.544Z] 12448.44 IOPS, 48.63 MiB/s [2024-11-20T09:01:09.544Z] 12504.60 IOPS, 48.85 MiB/s [2024-11-20T09:01:09.544Z] 12535.55 IOPS, 48.97 MiB/s [2024-11-20T09:01:09.544Z] [2024-11-20 10:00:54.448099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.628 [2024-11-20 10:00:54.448135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.628 [2024-11-20 10:00:54.448173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.628 [2024-11-20 10:00:54.448181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:38.628 [2024-11-20 10:00:54.448192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.628 [2024-11-20 10:00:54.448198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:38.628 [2024-11-20 10:00:54.448208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:27:38.629 [2024-11-20 10:00:54.448455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:108 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.448673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.448678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.449229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.449238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.449250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.449257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.449271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.449276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.449288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.449294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.449306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.449312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.449324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.449329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.449341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.449346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.449358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.449363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:38.629 [2024-11-20 10:00:54.449375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.629 [2024-11-20 10:00:54.449380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:38.630 [2024-11-20 10:00:54.449392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.630 [2024-11-20 10:00:54.449398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:38.630 [2024-11-20 10:00:54.449410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.630 [2024-11-20 10:00:54.449416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:38.630 [2024-11-20 10:00:54.449428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.630 [2024-11-20 10:00:54.449433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:38.630 [2024-11-20 10:00:54.449444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.630 [2024-11-20 10:00:54.449450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:38.630 [2024-11-20 10:00:54.449462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.630 [2024-11-20 10:00:54.449467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:38.630 [2024-11-20 10:00:54.449480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.630 [2024-11-20 10:00:54.449486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:38.630 [2024-11-20 10:00:54.449498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:38.630 [2024-11-20 10:00:54.449503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
[... ~150 near-identical records omitted: interleaved WRITE (lba 15712-15952) and READ (lba 14936-15304) commands on qid:1, each printed with a completion of ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:0032-007f ...]
00:27:38.632 [2024-11-20 10:00:54.451126] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:38.632 [2024-11-20 10:00:54.451132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:38.632 [2024-11-20 10:00:54.451147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:38.632 [2024-11-20 10:00:54.451152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:38.632 12443.58 IOPS, 48.61 MiB/s [2024-11-20T09:01:09.548Z]
11486.38 IOPS, 44.87 MiB/s [2024-11-20T09:01:09.548Z]
10665.93 IOPS, 41.66 MiB/s [2024-11-20T09:01:09.548Z]
10068.20 IOPS, 39.33 MiB/s [2024-11-20T09:01:09.548Z]
10246.00 IOPS, 40.02 MiB/s [2024-11-20T09:01:09.548Z]
10425.24 IOPS, 40.72 MiB/s [2024-11-20T09:01:09.548Z]
10794.89 IOPS, 42.17 MiB/s [2024-11-20T09:01:09.548Z]
11119.21 IOPS, 43.43 MiB/s [2024-11-20T09:01:09.548Z]
11307.80 IOPS, 44.17 MiB/s [2024-11-20T09:01:09.548Z]
11386.24 IOPS, 44.48 MiB/s [2024-11-20T09:01:09.548Z]
11454.23 IOPS, 44.74 MiB/s [2024-11-20T09:01:09.548Z]
11698.43 IOPS, 45.70 MiB/s [2024-11-20T09:01:09.548Z]
11919.58 IOPS, 46.56 MiB/s [2024-11-20T09:01:09.548Z]
[2024-11-20 10:01:07.079309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:38.632 [2024-11-20 10:01:07.079348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
[... 5 similar record pairs omitted: WRITE (lba 123416-123480) commands on qid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:0031-0035 ...]
00:27:38.632
[2024-11-20 10:01:07.079468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:38.632 [2024-11-20 10:01:07.079473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
[... ~55 near-identical records omitted: WRITE (lba 123512-123832) and READ (lba 123192-123384) commands on qid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:0037-0052 ...]
00:27:38.633 [2024-11-20 10:01:07.081527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:123856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:38.633 [2024-11-20 10:01:07.081532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:27:38.633 [2024-11-20 10:01:07.081543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:123872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:38.633 [2024-11-20 10:01:07.081548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:38.633 [2024-11-20 10:01:07.081558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:123888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:38.633 [2024-11-20 10:01:07.081563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:27:38.633 [2024-11-20 10:01:07.081574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:123904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:38.633 [2024-11-20 10:01:07.081579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:27:38.633 12035.20 IOPS, 47.01 MiB/s [2024-11-20T09:01:09.549Z]
12072.81 IOPS, 47.16 MiB/s [2024-11-20T09:01:09.549Z]
Received shutdown signal, test time was about 26.577682 seconds
00:27:38.633
00:27:38.633 Latency(us)
00:27:38.633 [2024-11-20T09:01:09.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:38.633 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:38.633 Verification LBA range: start 0x0 length 0x4000
00:27:38.633 Nvme0n1 : 26.58 12086.85 47.21 0.00 0.00 10571.32 901.12 3019898.88
00:27:38.633 [2024-11-20T09:01:09.549Z] ===================================================================================================================
00:27:38.633 [2024-11-20T09:01:09.549Z] Total : 12086.85 47.21 0.00 0.00 10571.32 901.12 3019898.88
00:27:38.633 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
10:01:09
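The NOTICE floods above are spdk_nvme_print_command/spdk_nvme_print_completion pairs: while the multipath test holds the active path's ANA group inaccessible, every queued I/O on qid:1 completes with a path-related status, and the "(03/02)" SPDK prints is the completion's (status code type / status code) pair. A minimal shell sketch of that decode, following the NVMe base spec's path-related status values (the helper name decode_nvme_status is ours, not part of the test suite):

  # Decode SPDK's "(SCT/SC)" pair for path-related completions (SCT 0x3).
  decode_nvme_status() {
    local sct=$1 sc=$2
    case "$sct/$sc" in
      03/00) echo 'Internal Path Error' ;;
      03/01) echo 'Asymmetric Access Persistent Loss' ;;
      03/02) echo 'Asymmetric Access Inaccessible' ;;
      03/03) echo 'Asymmetric Access Transition' ;;
      *)     echo "SCT=$sct SC=$sc: see the NVMe base spec status code tables" ;;
    esac
  }
  decode_nvme_status 03 02   # -> Asymmetric Access Inaccessible, the status logged above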
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1503549 ']' 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1503549 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1503549 ']' 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1503549 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1503549 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1503549' 00:27:38.908 killing process with pid 1503549 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1503549 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1503549 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.908 10:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.918 10:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:40.918 00:27:40.918 real 0m41.105s 00:27:40.918 user 1m46.364s 00:27:40.918 sys 0m11.377s 00:27:40.918 10:01:11 
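The nvmftestfini trace above tears everything down in a fixed order: drop the subsystem over JSON-RPC, unload the kernel initiator modules, SIGTERM the target app (killprocess), then restore firewall and interface state. A rough by-hand equivalent of what the trace records, as a sketch assuming the same workspace layout (the pid value is the one this run's script tracked):

  pid=1503549                                                              # nvmf_tgt PID from this run
  ./spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem first
  modprobe -v -r nvme-tcp nvme-fabrics     # unload host-side modules; prints the rmmod lines above
  kill "$pid"                              # killprocess: SIGTERM the target reactor
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: strip the test's firewall rules
  ip -4 addr flush cvl_0_1                 # release the addresses on the second test port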
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:40.918 10:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:40.918 ************************************ 00:27:40.918 END TEST nvmf_host_multipath_status 00:27:40.918 ************************************ 00:27:41.178 10:01:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:41.178 10:01:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:41.178 10:01:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:41.178 10:01:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.178 ************************************ 00:27:41.178 START TEST nvmf_discovery_remove_ifc 00:27:41.178 ************************************ 00:27:41.178 10:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:41.178 * Looking for test storage... 00:27:41.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:41.178 10:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:41.178 10:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:27:41.178 10:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:41.178 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:41.178 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:41.178 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:41.178 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:41.178 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:41.178 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:41.178 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:41.178 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:41.179 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:27:41.179 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:41.179 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:41.179 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:41.179 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:41.179 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:27:41.179 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:41.179 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:41.179 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:41.179 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:41.179 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:41.179 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:41.179 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:41.179 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:41.179 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:41.179 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:41.440 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:41.440 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:41.440 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:41.440 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:41.440 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:41.440 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:41.440 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:41.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:41.440 --rc genhtml_branch_coverage=1 00:27:41.440 --rc genhtml_function_coverage=1 00:27:41.440 --rc genhtml_legend=1 00:27:41.440 --rc geninfo_all_blocks=1 00:27:41.440 --rc geninfo_unexecuted_blocks=1 00:27:41.440 00:27:41.440 ' 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:41.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:41.441 --rc genhtml_branch_coverage=1 00:27:41.441 --rc genhtml_function_coverage=1 00:27:41.441 --rc genhtml_legend=1 00:27:41.441 --rc geninfo_all_blocks=1 00:27:41.441 --rc geninfo_unexecuted_blocks=1 00:27:41.441 00:27:41.441 ' 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:41.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:41.441 --rc genhtml_branch_coverage=1 00:27:41.441 --rc genhtml_function_coverage=1 00:27:41.441 --rc genhtml_legend=1 00:27:41.441 --rc geninfo_all_blocks=1 00:27:41.441 --rc geninfo_unexecuted_blocks=1 00:27:41.441 00:27:41.441 ' 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:41.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:41.441 --rc genhtml_branch_coverage=1 00:27:41.441 --rc genhtml_function_coverage=1 00:27:41.441 --rc genhtml_legend=1 00:27:41.441 --rc geninfo_all_blocks=1 00:27:41.441 --rc geninfo_unexecuted_blocks=1 00:27:41.441 00:27:41.441 ' 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:41.441 
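The cmp_versions walk above is scripts/common.sh deciding that the installed lcov (1.15) predates 2.x, which selects the legacy --rc lcov_* option spelling for LCOV_OPTS. Reduced to its core, the dotted-version comparison the trace steps through looks roughly like this (the function name lt_version is ours):

  lt_version() {   # usage: lt_version 1.15 2 -> exit 0 when $1 < $2
    local -a a b
    IFS=.- read -ra a <<< "$1"    # split each version on dots and dashes
    IFS=.- read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                      # equal versions are not "less than"
  }
  lt_version 1.15 2 && echo 'lcov < 2: keep legacy LCOV_OPTS'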
10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:41.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:27:41.441 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:27:49.581 10:01:19 
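discovery_remove_ifc.sh pins the discovery service to port 8009 with the well-known discovery NQN, subsystems under nqn.2016-06.io.spdk:cnode, and host NQN nqn.2021-12.io.spdk:test. The test itself drives an SPDK host app over /tmp/host.sock rather than nvme-cli, but the same endpoints can be inspected from a shell; a sketch only, where the 10.0.0.1 address and the cnode1 suffix are our assumptions (NVMF_PORT=4420 is from the setup above):

  nvme discover -t tcp -a 10.0.0.1 -s 8009     # query the discovery subsystem the test stands up
  nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn=nqn.2021-12.io.spdk:test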
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:49.581 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:49.581 10:01:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:49.581 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:49.581 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:49.581 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:49.581 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:49.582 
10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:49.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:49.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:27:49.582 00:27:49.582 --- 10.0.0.2 ping statistics --- 00:27:49.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.582 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:49.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:49.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:27:49.582 00:27:49.582 --- 10.0.0.1 ping statistics --- 00:27:49.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.582 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1514008 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1514008 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1514008 ']' 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:49.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:49.582 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:49.582 [2024-11-20 10:01:19.693996] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:27:49.582 [2024-11-20 10:01:19.694064] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:49.582 [2024-11-20 10:01:19.793723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.582 [2024-11-20 10:01:19.844643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:49.582 [2024-11-20 10:01:19.844692] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:49.582 [2024-11-20 10:01:19.844700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:49.582 [2024-11-20 10:01:19.844714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:49.582 [2024-11-20 10:01:19.844720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:49.582 [2024-11-20 10:01:19.845529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.843 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.843 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:49.843 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:49.843 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:49.843 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:49.843 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:49.844 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:49.844 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.844 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:49.844 [2024-11-20 10:01:20.564401] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.844 [2024-11-20 10:01:20.572637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:49.844 null0 00:27:49.844 [2024-11-20 10:01:20.604601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.844 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.844 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1514152 00:27:49.844 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:27:49.844 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1514152 /tmp/host.sock 00:27:49.844 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1514152 ']' 00:27:49.844 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:49.844 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:49.844 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:49.844 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:49.844 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:49.844 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:49.844 [2024-11-20 10:01:20.681903] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:27:49.844 [2024-11-20 10:01:20.681970] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1514152 ] 00:27:50.105 [2024-11-20 10:01:20.774703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.105 [2024-11-20 10:01:20.827692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.677 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:50.678 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:50.678 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:50.678 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:50.678 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.678 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:50.678 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.678 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:50.678 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.678 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:50.678 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.678 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:50.678 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.678 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:52.064 [2024-11-20 10:01:22.644402] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:52.064 [2024-11-20 10:01:22.644423] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:52.064 [2024-11-20 10:01:22.644438] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:52.064 [2024-11-20 10:01:22.730731] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:52.064 [2024-11-20 10:01:22.953993] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:52.064 [2024-11-20 10:01:22.954963] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1bda3f0:1 started. 00:27:52.064 [2024-11-20 10:01:22.956528] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:52.064 [2024-11-20 10:01:22.956571] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:52.064 [2024-11-20 10:01:22.956592] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:52.064 [2024-11-20 10:01:22.956605] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:52.064 [2024-11-20 10:01:22.956625] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:52.064 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.064 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:52.064 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:52.064 [2024-11-20 10:01:22.962593] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1bda3f0 was disconnected and freed. delete nvme_qpair. 
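The @29/@33/@34 records that follow all come from two small helpers in discovery_remove_ifc.sh: get_bdev_list flattens the host app's bdev names over the RPC socket, and wait_for_bdev polls it once per second until the list matches an expected value. A minimal sketch reconstructed from the traced commands (rpc_cmd is the SPDK test wrapper around the RPC client; the real helpers may differ in detail):

    # Reconstructed from the xtrace records above; not the verbatim script.
    get_bdev_list() {
        # Ask the host app for its bdevs and flatten the names into one
        # sorted, space-separated line.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        # Poll until the bdev list equals the expected name(s).
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }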
00:27:52.064 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:52.064 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:52.064 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.064 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:52.064 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:52.064 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:52.327 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.327 10:01:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:52.327 10:01:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:52.327 10:01:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:52.327 10:01:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:52.327 10:01:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:52.327 10:01:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:52.327 10:01:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:52.327 10:01:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.327 10:01:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:52.327 10:01:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:52.327 10:01:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:52.327 10:01:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.327 10:01:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:52.327 10:01:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:53.712 10:01:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:53.712 10:01:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:53.712 10:01:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:53.712 10:01:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.712 10:01:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:53.712 10:01:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:53.712 10:01:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:53.712 10:01:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.712 10:01:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:53.712 10:01:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:54.655 10:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:54.655 10:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:54.655 10:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:54.655 10:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.655 10:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:54.655 10:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:54.655 10:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:54.655 10:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.655 10:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:54.655 10:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:55.598 10:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:55.598 10:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:55.598 10:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:55.598 10:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.598 10:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:55.598 10:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:55.598 10:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:55.598 10:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.598 10:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:55.598 10:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:56.540 10:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:56.540 10:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:56.540 10:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:56.540 10:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.540 10:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:56.540 10:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:56.540 10:01:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:56.540 10:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.540 10:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:56.540 10:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:57.927 10:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:57.927 10:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:57.927 10:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:57.927 10:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.927 10:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:57.927 10:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:57.927 10:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:57.927 [2024-11-20 10:01:28.407232] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:57.927 [2024-11-20 10:01:28.407274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.927 [2024-11-20 10:01:28.407285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.927 [2024-11-20 10:01:28.407295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.927 [2024-11-20 10:01:28.407303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.927 [2024-11-20 10:01:28.407311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.927 [2024-11-20 10:01:28.407317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.927 [2024-11-20 10:01:28.407324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.927 [2024-11-20 10:01:28.407334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.927 [2024-11-20 10:01:28.407341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.927 [2024-11-20 10:01:28.407347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.927 [2024-11-20 10:01:28.407353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb6c00 is same with the state(6) to be set 00:27:57.927 [2024-11-20 10:01:28.417254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb6c00 (9): 
Bad file descriptor 00:27:57.927 [2024-11-20 10:01:28.427287] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:57.927 [2024-11-20 10:01:28.427296] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:57.927 [2024-11-20 10:01:28.427299] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:57.927 [2024-11-20 10:01:28.427303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:57.927 [2024-11-20 10:01:28.427323] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:57.927 10:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.927 10:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:57.927 10:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:58.869 [2024-11-20 10:01:29.435219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:58.869 [2024-11-20 10:01:29.435316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb6c00 with addr=10.0.0.2, port=4420 00:27:58.869 [2024-11-20 10:01:29.435348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb6c00 is same with the state(6) to be set 00:27:58.869 [2024-11-20 10:01:29.435408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb6c00 (9): Bad file descriptor 00:27:58.869 [2024-11-20 10:01:29.435525] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:58.869 [2024-11-20 10:01:29.435582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:58.869 [2024-11-20 10:01:29.435605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:58.869 [2024-11-20 10:01:29.435631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:58.869 [2024-11-20 10:01:29.435652] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:58.869 [2024-11-20 10:01:29.435671] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:58.869 [2024-11-20 10:01:29.435686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:58.869 [2024-11-20 10:01:29.435709] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
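The timeout and reconnect records above are the direct result of the fault injected at @75/@76: the target-side address was deleted and the link downed inside the namespace, so the host's TCP qpair dies and bdev_nvme begins its reset/reconnect cycle. Restated from the traced commands, with wait_for_bdev as sketched earlier:

    # Fault injection as traced at @75/@76, then the @79 wait for the bdev to vanish.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''   # list goes empty once bdev_nvme deletes nvme0n1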
00:27:58.869 [2024-11-20 10:01:29.435724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:58.869 10:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:58.869 10:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:58.869 10:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:58.869 10:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.869 10:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:58.869 10:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:58.869 10:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:58.869 10:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.869 10:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:58.869 10:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:59.811 [2024-11-20 10:01:30.438133] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:59.811 [2024-11-20 10:01:30.438152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:59.811 [2024-11-20 10:01:30.438166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:59.811 [2024-11-20 10:01:30.438171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:59.811 [2024-11-20 10:01:30.438177] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:59.811 [2024-11-20 10:01:30.438182] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:59.811 [2024-11-20 10:01:30.438187] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:59.811 [2024-11-20 10:01:30.438190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
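errno = 110 in the posix.c record above is ETIMEDOUT on Linux: with 10.0.0.2 removed from the target namespace, the reconnect to port 4420 can never complete. How long the host keeps retrying before deleting the controller is governed by the options passed to bdev_nvme_start_discovery at @69 earlier in this log; restated here with interpretive comments (the comments are annotation, not log text):

    # --reconnect-delay-sec 1      : ~1 s between reconnect attempts
    # --ctrlr-loss-timeout-sec 2   : delete the controller after ~2 s without a connection
    # --fast-io-fail-timeout-sec 1 : fail queued I/O ~1 s after the connection drops
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach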
00:27:59.811 [2024-11-20 10:01:30.438210] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:59.811 [2024-11-20 10:01:30.438229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.811 [2024-11-20 10:01:30.438236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.811 [2024-11-20 10:01:30.438244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.811 [2024-11-20 10:01:30.438250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.811 [2024-11-20 10:01:30.438256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.811 [2024-11-20 10:01:30.438262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.811 [2024-11-20 10:01:30.438268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.811 [2024-11-20 10:01:30.438274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.811 [2024-11-20 10:01:30.438280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.811 [2024-11-20 10:01:30.438285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.811 [2024-11-20 10:01:30.438290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
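The five ABORTED - SQ DELETION completions above (qid:0, cid:0 through cid:4) are the four outstanding Async Event Requests plus the Keep Alive that were still pending on the admin queue when it was deleted; they are expected noise during controller teardown, not failures. A quick way to tally them when scanning a saved log (build.log is a placeholder filename):

    # Count which admin commands were aborted during teardown; the patterns
    # match the record format shown above.
    grep -o 'ASYNC EVENT REQUEST\|KEEP ALIVE' build.log | sort | uniq -c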
00:27:59.811 [2024-11-20 10:01:30.438339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba6340 (9): Bad file descriptor 00:27:59.811 [2024-11-20 10:01:30.439351] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:59.811 [2024-11-20 10:01:30.439360] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:59.811 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:59.811 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:59.811 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:59.811 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.811 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:59.811 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:59.811 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:59.811 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.811 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:59.811 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.811 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.811 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:59.811 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:59.811 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:59.811 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:59.811 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.812 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:59.812 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:59.812 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:59.812 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.812 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:59.812 10:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:01.195 10:01:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:01.195 10:01:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:01.195 10:01:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:01.195 10:01:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.195 10:01:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:01.195 10:01:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:01.195 10:01:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:01.195 10:01:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.195 10:01:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:01.195 10:01:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:01.765 [2024-11-20 10:01:32.499304] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:01.765 [2024-11-20 10:01:32.499320] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:01.765 [2024-11-20 10:01:32.499330] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:01.765 [2024-11-20 10:01:32.587590] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:02.025 [2024-11-20 10:01:32.688239] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:28:02.025 [2024-11-20 10:01:32.688932] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1bab130:1 started. 00:28:02.025 [2024-11-20 10:01:32.689833] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:02.026 [2024-11-20 10:01:32.689859] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:02.026 [2024-11-20 10:01:32.689873] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:02.026 [2024-11-20 10:01:32.689883] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:02.026 [2024-11-20 10:01:32.689889] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:02.026 [2024-11-20 10:01:32.737759] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1bab130 was disconnected and freed. delete nvme_qpair. 
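At this point the interface has been restored (@82/@83 earlier re-added 10.0.0.2/24 and brought cvl_0_0 back up) and the still-running discovery service has re-attached, creating a fresh controller and bdev (nvme1, nvme1n1); the @86 wait that has been polling above completes just below once the list reports nvme1n1. Restated from the traced commands:

    # Recovery as traced at @82/@83, then wait for the replacement bdev.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1   # discovery creates nvme1 and exposes nvme1n1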
00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1514152 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1514152 ']' 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1514152 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1514152 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1514152' 00:28:02.026 killing process with pid 1514152 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1514152 00:28:02.026 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1514152 00:28:02.287 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:02.287 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:02.287 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:28:02.287 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:02.287 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:28:02.287 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:02.287 10:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:02.287 rmmod nvme_tcp 00:28:02.287 rmmod nvme_fabrics 00:28:02.287 rmmod nvme_keyring 00:28:02.287 10:01:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:02.287 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:28:02.287 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:28:02.287 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1514008 ']' 00:28:02.287 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1514008 00:28:02.287 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1514008 ']' 00:28:02.287 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1514008 00:28:02.287 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:02.287 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:02.287 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1514008 00:28:02.287 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:02.287 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:02.287 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1514008' 00:28:02.287 killing process with pid 1514008 00:28:02.287 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1514008 00:28:02.287 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1514008 00:28:02.549 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:02.549 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:02.549 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:02.549 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:28:02.549 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:28:02.549 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:02.549 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:28:02.549 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:02.549 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:02.549 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.549 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:02.549 10:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.468 10:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:04.468 00:28:04.468 real 0m23.403s 00:28:04.468 user 0m27.521s 00:28:04.468 sys 0m7.145s 00:28:04.468 10:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:28:04.468 10:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:04.468 ************************************ 00:28:04.468 END TEST nvmf_discovery_remove_ifc 00:28:04.468 ************************************ 00:28:04.468 10:01:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:04.468 10:01:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:04.468 10:01:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.468 10:01:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.468 ************************************ 00:28:04.468 START TEST nvmf_identify_kernel_target 00:28:04.468 ************************************ 00:28:04.468 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:04.730 * Looking for test storage... 00:28:04.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:04.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.730 --rc genhtml_branch_coverage=1 00:28:04.730 --rc genhtml_function_coverage=1 00:28:04.730 --rc genhtml_legend=1 00:28:04.730 --rc geninfo_all_blocks=1 00:28:04.730 --rc geninfo_unexecuted_blocks=1 00:28:04.730 00:28:04.730 ' 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:04.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.730 --rc genhtml_branch_coverage=1 00:28:04.730 --rc genhtml_function_coverage=1 00:28:04.730 --rc genhtml_legend=1 00:28:04.730 --rc geninfo_all_blocks=1 00:28:04.730 --rc geninfo_unexecuted_blocks=1 00:28:04.730 00:28:04.730 ' 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:04.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.730 --rc genhtml_branch_coverage=1 00:28:04.730 --rc genhtml_function_coverage=1 00:28:04.730 --rc genhtml_legend=1 00:28:04.730 --rc geninfo_all_blocks=1 00:28:04.730 --rc geninfo_unexecuted_blocks=1 00:28:04.730 00:28:04.730 ' 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:04.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.730 --rc genhtml_branch_coverage=1 00:28:04.730 --rc genhtml_function_coverage=1 00:28:04.730 --rc genhtml_legend=1 00:28:04.730 --rc geninfo_all_blocks=1 00:28:04.730 --rc geninfo_unexecuted_blocks=1 00:28:04.730 00:28:04.730 ' 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:04.730 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:28:04.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:28:04.731 10:01:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:28:12.872 10:01:42 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:12.872 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:12.872 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:12.872 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:12.872 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:12.872 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:12.873 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:12.873 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:12.873 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:12.873 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:12.873 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:12.873 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:12.873 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:12.873 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:12.873 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:12.873 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:12.873 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:12.873 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:12.873 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:12.873 10:01:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:12.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:12.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:28:12.873 00:28:12.873 --- 10.0.0.2 ping statistics --- 00:28:12.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.873 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:12.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:12.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:28:12.873 00:28:12.873 --- 10.0.0.1 ping statistics --- 00:28:12.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.873 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.873 10:01:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:12.873 10:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:16.174 Waiting for block devices as requested 00:28:16.174 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:16.174 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:16.174 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:16.174 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:16.174 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:16.174 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:16.434 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:16.434 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:16.434 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:16.695 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:16.695 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:16.955 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:16.955 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:16.955 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:17.217 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:17.217 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:17.217 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:17.478 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
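What the trace is doing here, condensed: the loop above probes each /sys/block/nvme* device (the zoned check via queue/zoned), the commands that follow rule out an existing GPT on /dev/nvme0n1 via blkid/spdk-gpt.py, and the test then builds a kernel NVMe-oF target by hand through configfs. xtrace does not record redirection targets, so the attribute file names in this sketch (attr_model, attr_allow_any_host, device_path, enable, addr_*) are assumed from the standard nvmet configfs layout rather than read from the log:

  # Sketch of the configure_kernel_target sequence traced below (nvmf/common.sh).
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  modprobe nvmet
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"

  # Subsystem identity and access control (assumed redirection targets):
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
  echo 1 > "$subsys/attr_allow_any_host"

  # Back the namespace with the free NVMe block device found above:
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"

  # TCP listener on the initiator-side address the test assigned earlier:
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp > "$nvmet/ports/1/addr_trtype"
  echo 4420 > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4 > "$nvmet/ports/1/addr_adrfam"

  # Exposing the subsystem on the port is just a symlink:
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"

With the port linked, the kernel target answers on 10.0.0.1:4420, which is exactly what the nvme discover call below reports: two records, the discovery subsystem plus nqn.2016-06.io.spdk:testnqn.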
00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:17.739 No valid GPT data, bailing 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:28:17.739 00:28:17.739 Discovery Log Number of Records 2, Generation counter 2 00:28:17.739 =====Discovery Log Entry 0====== 00:28:17.739 trtype: tcp 00:28:17.739 adrfam: ipv4 00:28:17.739 subtype: current discovery subsystem 00:28:17.739 treq: not specified, sq flow control disable supported 00:28:17.739 portid: 1 00:28:17.739 trsvcid: 4420 00:28:17.739 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:17.739 traddr: 10.0.0.1 00:28:17.739 eflags: none 00:28:17.739 sectype: none 00:28:17.739 =====Discovery Log Entry 1====== 00:28:17.739 trtype: tcp 00:28:17.739 adrfam: ipv4 00:28:17.739 subtype: nvme subsystem 00:28:17.739 treq: not specified, sq flow control disable 
supported 00:28:17.739 portid: 1 00:28:17.739 trsvcid: 4420 00:28:17.739 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:17.739 traddr: 10.0.0.1 00:28:17.739 eflags: none 00:28:17.739 sectype: none 00:28:17.739 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:17.739 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:18.001 ===================================================== 00:28:18.001 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:18.001 ===================================================== 00:28:18.001 Controller Capabilities/Features 00:28:18.001 ================================ 00:28:18.001 Vendor ID: 0000 00:28:18.001 Subsystem Vendor ID: 0000 00:28:18.001 Serial Number: 81ba2246266a00133843 00:28:18.001 Model Number: Linux 00:28:18.001 Firmware Version: 6.8.9-20 00:28:18.001 Recommended Arb Burst: 0 00:28:18.001 IEEE OUI Identifier: 00 00 00 00:28:18.001 Multi-path I/O 00:28:18.001 May have multiple subsystem ports: No 00:28:18.001 May have multiple controllers: No 00:28:18.001 Associated with SR-IOV VF: No 00:28:18.001 Max Data Transfer Size: Unlimited 00:28:18.001 Max Number of Namespaces: 0 00:28:18.001 Max Number of I/O Queues: 1024 00:28:18.001 NVMe Specification Version (VS): 1.3 00:28:18.001 NVMe Specification Version (Identify): 1.3 00:28:18.001 Maximum Queue Entries: 1024 00:28:18.001 Contiguous Queues Required: No 00:28:18.001 Arbitration Mechanisms Supported 00:28:18.001 Weighted Round Robin: Not Supported 00:28:18.001 Vendor Specific: Not Supported 00:28:18.001 Reset Timeout: 7500 ms 00:28:18.001 Doorbell Stride: 4 bytes 00:28:18.001 NVM Subsystem Reset: Not Supported 00:28:18.001 Command Sets Supported 00:28:18.001 NVM Command Set: Supported 00:28:18.001 Boot Partition: Not Supported 00:28:18.001 Memory Page Size Minimum: 4096 bytes 00:28:18.001 Memory Page Size Maximum: 4096 bytes 00:28:18.001 Persistent Memory Region: Not Supported 00:28:18.001 Optional Asynchronous Events Supported 00:28:18.001 Namespace Attribute Notices: Not Supported 00:28:18.001 Firmware Activation Notices: Not Supported 00:28:18.001 ANA Change Notices: Not Supported 00:28:18.001 PLE Aggregate Log Change Notices: Not Supported 00:28:18.001 LBA Status Info Alert Notices: Not Supported 00:28:18.001 EGE Aggregate Log Change Notices: Not Supported 00:28:18.001 Normal NVM Subsystem Shutdown event: Not Supported 00:28:18.001 Zone Descriptor Change Notices: Not Supported 00:28:18.001 Discovery Log Change Notices: Supported 00:28:18.001 Controller Attributes 00:28:18.001 128-bit Host Identifier: Not Supported 00:28:18.001 Non-Operational Permissive Mode: Not Supported 00:28:18.001 NVM Sets: Not Supported 00:28:18.001 Read Recovery Levels: Not Supported 00:28:18.001 Endurance Groups: Not Supported 00:28:18.001 Predictable Latency Mode: Not Supported 00:28:18.001 Traffic Based Keep ALive: Not Supported 00:28:18.001 Namespace Granularity: Not Supported 00:28:18.001 SQ Associations: Not Supported 00:28:18.001 UUID List: Not Supported 00:28:18.001 Multi-Domain Subsystem: Not Supported 00:28:18.001 Fixed Capacity Management: Not Supported 00:28:18.001 Variable Capacity Management: Not Supported 00:28:18.001 Delete Endurance Group: Not Supported 00:28:18.001 Delete NVM Set: Not Supported 00:28:18.001 Extended LBA Formats Supported: Not Supported 00:28:18.001 Flexible Data Placement 
Supported: Not Supported 00:28:18.001 00:28:18.001 Controller Memory Buffer Support 00:28:18.001 ================================ 00:28:18.001 Supported: No 00:28:18.001 00:28:18.001 Persistent Memory Region Support 00:28:18.001 ================================ 00:28:18.001 Supported: No 00:28:18.001 00:28:18.001 Admin Command Set Attributes 00:28:18.001 ============================ 00:28:18.001 Security Send/Receive: Not Supported 00:28:18.001 Format NVM: Not Supported 00:28:18.001 Firmware Activate/Download: Not Supported 00:28:18.001 Namespace Management: Not Supported 00:28:18.001 Device Self-Test: Not Supported 00:28:18.001 Directives: Not Supported 00:28:18.001 NVMe-MI: Not Supported 00:28:18.001 Virtualization Management: Not Supported 00:28:18.001 Doorbell Buffer Config: Not Supported 00:28:18.001 Get LBA Status Capability: Not Supported 00:28:18.001 Command & Feature Lockdown Capability: Not Supported 00:28:18.001 Abort Command Limit: 1 00:28:18.001 Async Event Request Limit: 1 00:28:18.001 Number of Firmware Slots: N/A 00:28:18.001 Firmware Slot 1 Read-Only: N/A 00:28:18.001 Firmware Activation Without Reset: N/A 00:28:18.001 Multiple Update Detection Support: N/A 00:28:18.001 Firmware Update Granularity: No Information Provided 00:28:18.001 Per-Namespace SMART Log: No 00:28:18.001 Asymmetric Namespace Access Log Page: Not Supported 00:28:18.001 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:18.001 Command Effects Log Page: Not Supported 00:28:18.001 Get Log Page Extended Data: Supported 00:28:18.001 Telemetry Log Pages: Not Supported 00:28:18.001 Persistent Event Log Pages: Not Supported 00:28:18.001 Supported Log Pages Log Page: May Support 00:28:18.001 Commands Supported & Effects Log Page: Not Supported 00:28:18.001 Feature Identifiers & Effects Log Page:May Support 00:28:18.001 NVMe-MI Commands & Effects Log Page: May Support 00:28:18.001 Data Area 4 for Telemetry Log: Not Supported 00:28:18.001 Error Log Page Entries Supported: 1 00:28:18.001 Keep Alive: Not Supported 00:28:18.001 00:28:18.001 NVM Command Set Attributes 00:28:18.001 ========================== 00:28:18.001 Submission Queue Entry Size 00:28:18.002 Max: 1 00:28:18.002 Min: 1 00:28:18.002 Completion Queue Entry Size 00:28:18.002 Max: 1 00:28:18.002 Min: 1 00:28:18.002 Number of Namespaces: 0 00:28:18.002 Compare Command: Not Supported 00:28:18.002 Write Uncorrectable Command: Not Supported 00:28:18.002 Dataset Management Command: Not Supported 00:28:18.002 Write Zeroes Command: Not Supported 00:28:18.002 Set Features Save Field: Not Supported 00:28:18.002 Reservations: Not Supported 00:28:18.002 Timestamp: Not Supported 00:28:18.002 Copy: Not Supported 00:28:18.002 Volatile Write Cache: Not Present 00:28:18.002 Atomic Write Unit (Normal): 1 00:28:18.002 Atomic Write Unit (PFail): 1 00:28:18.002 Atomic Compare & Write Unit: 1 00:28:18.002 Fused Compare & Write: Not Supported 00:28:18.002 Scatter-Gather List 00:28:18.002 SGL Command Set: Supported 00:28:18.002 SGL Keyed: Not Supported 00:28:18.002 SGL Bit Bucket Descriptor: Not Supported 00:28:18.002 SGL Metadata Pointer: Not Supported 00:28:18.002 Oversized SGL: Not Supported 00:28:18.002 SGL Metadata Address: Not Supported 00:28:18.002 SGL Offset: Supported 00:28:18.002 Transport SGL Data Block: Not Supported 00:28:18.002 Replay Protected Memory Block: Not Supported 00:28:18.002 00:28:18.002 Firmware Slot Information 00:28:18.002 ========================= 00:28:18.002 Active slot: 0 00:28:18.002 00:28:18.002 00:28:18.002 Error Log 00:28:18.002 
========= 00:28:18.002 00:28:18.002 Active Namespaces 00:28:18.002 ================= 00:28:18.002 Discovery Log Page 00:28:18.002 ================== 00:28:18.002 Generation Counter: 2 00:28:18.002 Number of Records: 2 00:28:18.002 Record Format: 0 00:28:18.002 00:28:18.002 Discovery Log Entry 0 00:28:18.002 ---------------------- 00:28:18.002 Transport Type: 3 (TCP) 00:28:18.002 Address Family: 1 (IPv4) 00:28:18.002 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:18.002 Entry Flags: 00:28:18.002 Duplicate Returned Information: 0 00:28:18.002 Explicit Persistent Connection Support for Discovery: 0 00:28:18.002 Transport Requirements: 00:28:18.002 Secure Channel: Not Specified 00:28:18.002 Port ID: 1 (0x0001) 00:28:18.002 Controller ID: 65535 (0xffff) 00:28:18.002 Admin Max SQ Size: 32 00:28:18.002 Transport Service Identifier: 4420 00:28:18.002 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:18.002 Transport Address: 10.0.0.1 00:28:18.002 Discovery Log Entry 1 00:28:18.002 ---------------------- 00:28:18.002 Transport Type: 3 (TCP) 00:28:18.002 Address Family: 1 (IPv4) 00:28:18.002 Subsystem Type: 2 (NVM Subsystem) 00:28:18.002 Entry Flags: 00:28:18.002 Duplicate Returned Information: 0 00:28:18.002 Explicit Persistent Connection Support for Discovery: 0 00:28:18.002 Transport Requirements: 00:28:18.002 Secure Channel: Not Specified 00:28:18.002 Port ID: 1 (0x0001) 00:28:18.002 Controller ID: 65535 (0xffff) 00:28:18.002 Admin Max SQ Size: 32 00:28:18.002 Transport Service Identifier: 4420 00:28:18.002 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:18.002 Transport Address: 10.0.0.1 00:28:18.002 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:18.002 get_feature(0x01) failed 00:28:18.002 get_feature(0x02) failed 00:28:18.002 get_feature(0x04) failed 00:28:18.002 ===================================================== 00:28:18.002 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:18.002 ===================================================== 00:28:18.002 Controller Capabilities/Features 00:28:18.002 ================================ 00:28:18.002 Vendor ID: 0000 00:28:18.002 Subsystem Vendor ID: 0000 00:28:18.002 Serial Number: 82deff8e1aab7a67bda2 00:28:18.002 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:18.002 Firmware Version: 6.8.9-20 00:28:18.002 Recommended Arb Burst: 6 00:28:18.002 IEEE OUI Identifier: 00 00 00 00:28:18.002 Multi-path I/O 00:28:18.002 May have multiple subsystem ports: Yes 00:28:18.002 May have multiple controllers: Yes 00:28:18.002 Associated with SR-IOV VF: No 00:28:18.002 Max Data Transfer Size: Unlimited 00:28:18.002 Max Number of Namespaces: 1024 00:28:18.002 Max Number of I/O Queues: 128 00:28:18.002 NVMe Specification Version (VS): 1.3 00:28:18.002 NVMe Specification Version (Identify): 1.3 00:28:18.002 Maximum Queue Entries: 1024 00:28:18.002 Contiguous Queues Required: No 00:28:18.002 Arbitration Mechanisms Supported 00:28:18.002 Weighted Round Robin: Not Supported 00:28:18.002 Vendor Specific: Not Supported 00:28:18.002 Reset Timeout: 7500 ms 00:28:18.002 Doorbell Stride: 4 bytes 00:28:18.002 NVM Subsystem Reset: Not Supported 00:28:18.002 Command Sets Supported 00:28:18.002 NVM Command Set: Supported 00:28:18.002 Boot Partition: Not Supported 00:28:18.002 
Memory Page Size Minimum: 4096 bytes 00:28:18.002 Memory Page Size Maximum: 4096 bytes 00:28:18.002 Persistent Memory Region: Not Supported 00:28:18.002 Optional Asynchronous Events Supported 00:28:18.002 Namespace Attribute Notices: Supported 00:28:18.002 Firmware Activation Notices: Not Supported 00:28:18.002 ANA Change Notices: Supported 00:28:18.002 PLE Aggregate Log Change Notices: Not Supported 00:28:18.002 LBA Status Info Alert Notices: Not Supported 00:28:18.002 EGE Aggregate Log Change Notices: Not Supported 00:28:18.002 Normal NVM Subsystem Shutdown event: Not Supported 00:28:18.002 Zone Descriptor Change Notices: Not Supported 00:28:18.002 Discovery Log Change Notices: Not Supported 00:28:18.002 Controller Attributes 00:28:18.002 128-bit Host Identifier: Supported 00:28:18.002 Non-Operational Permissive Mode: Not Supported 00:28:18.002 NVM Sets: Not Supported 00:28:18.002 Read Recovery Levels: Not Supported 00:28:18.002 Endurance Groups: Not Supported 00:28:18.002 Predictable Latency Mode: Not Supported 00:28:18.002 Traffic Based Keep ALive: Supported 00:28:18.002 Namespace Granularity: Not Supported 00:28:18.002 SQ Associations: Not Supported 00:28:18.002 UUID List: Not Supported 00:28:18.002 Multi-Domain Subsystem: Not Supported 00:28:18.002 Fixed Capacity Management: Not Supported 00:28:18.002 Variable Capacity Management: Not Supported 00:28:18.002 Delete Endurance Group: Not Supported 00:28:18.002 Delete NVM Set: Not Supported 00:28:18.002 Extended LBA Formats Supported: Not Supported 00:28:18.002 Flexible Data Placement Supported: Not Supported 00:28:18.002 00:28:18.002 Controller Memory Buffer Support 00:28:18.002 ================================ 00:28:18.002 Supported: No 00:28:18.002 00:28:18.002 Persistent Memory Region Support 00:28:18.002 ================================ 00:28:18.002 Supported: No 00:28:18.002 00:28:18.002 Admin Command Set Attributes 00:28:18.002 ============================ 00:28:18.002 Security Send/Receive: Not Supported 00:28:18.002 Format NVM: Not Supported 00:28:18.002 Firmware Activate/Download: Not Supported 00:28:18.002 Namespace Management: Not Supported 00:28:18.002 Device Self-Test: Not Supported 00:28:18.002 Directives: Not Supported 00:28:18.002 NVMe-MI: Not Supported 00:28:18.002 Virtualization Management: Not Supported 00:28:18.002 Doorbell Buffer Config: Not Supported 00:28:18.002 Get LBA Status Capability: Not Supported 00:28:18.002 Command & Feature Lockdown Capability: Not Supported 00:28:18.002 Abort Command Limit: 4 00:28:18.002 Async Event Request Limit: 4 00:28:18.002 Number of Firmware Slots: N/A 00:28:18.002 Firmware Slot 1 Read-Only: N/A 00:28:18.002 Firmware Activation Without Reset: N/A 00:28:18.002 Multiple Update Detection Support: N/A 00:28:18.002 Firmware Update Granularity: No Information Provided 00:28:18.002 Per-Namespace SMART Log: Yes 00:28:18.002 Asymmetric Namespace Access Log Page: Supported 00:28:18.002 ANA Transition Time : 10 sec 00:28:18.002 00:28:18.002 Asymmetric Namespace Access Capabilities 00:28:18.002 ANA Optimized State : Supported 00:28:18.002 ANA Non-Optimized State : Supported 00:28:18.002 ANA Inaccessible State : Supported 00:28:18.002 ANA Persistent Loss State : Supported 00:28:18.002 ANA Change State : Supported 00:28:18.002 ANAGRPID is not changed : No 00:28:18.002 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:18.002 00:28:18.002 ANA Group Identifier Maximum : 128 00:28:18.002 Number of ANA Group Identifiers : 128 00:28:18.002 Max Number of Allowed Namespaces : 1024 00:28:18.002 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:18.002 Command Effects Log Page: Supported 00:28:18.002 Get Log Page Extended Data: Supported 00:28:18.002 Telemetry Log Pages: Not Supported 00:28:18.002 Persistent Event Log Pages: Not Supported 00:28:18.002 Supported Log Pages Log Page: May Support 00:28:18.003 Commands Supported & Effects Log Page: Not Supported 00:28:18.003 Feature Identifiers & Effects Log Page:May Support 00:28:18.003 NVMe-MI Commands & Effects Log Page: May Support 00:28:18.003 Data Area 4 for Telemetry Log: Not Supported 00:28:18.003 Error Log Page Entries Supported: 128 00:28:18.003 Keep Alive: Supported 00:28:18.003 Keep Alive Granularity: 1000 ms 00:28:18.003 00:28:18.003 NVM Command Set Attributes 00:28:18.003 ========================== 00:28:18.003 Submission Queue Entry Size 00:28:18.003 Max: 64 00:28:18.003 Min: 64 00:28:18.003 Completion Queue Entry Size 00:28:18.003 Max: 16 00:28:18.003 Min: 16 00:28:18.003 Number of Namespaces: 1024 00:28:18.003 Compare Command: Not Supported 00:28:18.003 Write Uncorrectable Command: Not Supported 00:28:18.003 Dataset Management Command: Supported 00:28:18.003 Write Zeroes Command: Supported 00:28:18.003 Set Features Save Field: Not Supported 00:28:18.003 Reservations: Not Supported 00:28:18.003 Timestamp: Not Supported 00:28:18.003 Copy: Not Supported 00:28:18.003 Volatile Write Cache: Present 00:28:18.003 Atomic Write Unit (Normal): 1 00:28:18.003 Atomic Write Unit (PFail): 1 00:28:18.003 Atomic Compare & Write Unit: 1 00:28:18.003 Fused Compare & Write: Not Supported 00:28:18.003 Scatter-Gather List 00:28:18.003 SGL Command Set: Supported 00:28:18.003 SGL Keyed: Not Supported 00:28:18.003 SGL Bit Bucket Descriptor: Not Supported 00:28:18.003 SGL Metadata Pointer: Not Supported 00:28:18.003 Oversized SGL: Not Supported 00:28:18.003 SGL Metadata Address: Not Supported 00:28:18.003 SGL Offset: Supported 00:28:18.003 Transport SGL Data Block: Not Supported 00:28:18.003 Replay Protected Memory Block: Not Supported 00:28:18.003 00:28:18.003 Firmware Slot Information 00:28:18.003 ========================= 00:28:18.003 Active slot: 0 00:28:18.003 00:28:18.003 Asymmetric Namespace Access 00:28:18.003 =========================== 00:28:18.003 Change Count : 0 00:28:18.003 Number of ANA Group Descriptors : 1 00:28:18.003 ANA Group Descriptor : 0 00:28:18.003 ANA Group ID : 1 00:28:18.003 Number of NSID Values : 1 00:28:18.003 Change Count : 0 00:28:18.003 ANA State : 1 00:28:18.003 Namespace Identifier : 1 00:28:18.003 00:28:18.003 Commands Supported and Effects 00:28:18.003 ============================== 00:28:18.003 Admin Commands 00:28:18.003 -------------- 00:28:18.003 Get Log Page (02h): Supported 00:28:18.003 Identify (06h): Supported 00:28:18.003 Abort (08h): Supported 00:28:18.003 Set Features (09h): Supported 00:28:18.003 Get Features (0Ah): Supported 00:28:18.003 Asynchronous Event Request (0Ch): Supported 00:28:18.003 Keep Alive (18h): Supported 00:28:18.003 I/O Commands 00:28:18.003 ------------ 00:28:18.003 Flush (00h): Supported 00:28:18.003 Write (01h): Supported LBA-Change 00:28:18.003 Read (02h): Supported 00:28:18.003 Write Zeroes (08h): Supported LBA-Change 00:28:18.003 Dataset Management (09h): Supported 00:28:18.003 00:28:18.003 Error Log 00:28:18.003 ========= 00:28:18.003 Entry: 0 00:28:18.003 Error Count: 0x3 00:28:18.003 Submission Queue Id: 0x0 00:28:18.003 Command Id: 0x5 00:28:18.003 Phase Bit: 0 00:28:18.003 Status Code: 0x2 00:28:18.003 Status Code Type: 0x0 00:28:18.003 Do Not Retry: 1 00:28:18.003 
Error Location: 0x28 00:28:18.003 LBA: 0x0 00:28:18.003 Namespace: 0x0 00:28:18.003 Vendor Log Page: 0x0 00:28:18.003 ----------- 00:28:18.003 Entry: 1 00:28:18.003 Error Count: 0x2 00:28:18.003 Submission Queue Id: 0x0 00:28:18.003 Command Id: 0x5 00:28:18.003 Phase Bit: 0 00:28:18.003 Status Code: 0x2 00:28:18.003 Status Code Type: 0x0 00:28:18.003 Do Not Retry: 1 00:28:18.003 Error Location: 0x28 00:28:18.003 LBA: 0x0 00:28:18.003 Namespace: 0x0 00:28:18.003 Vendor Log Page: 0x0 00:28:18.003 ----------- 00:28:18.003 Entry: 2 00:28:18.003 Error Count: 0x1 00:28:18.003 Submission Queue Id: 0x0 00:28:18.003 Command Id: 0x4 00:28:18.003 Phase Bit: 0 00:28:18.003 Status Code: 0x2 00:28:18.003 Status Code Type: 0x0 00:28:18.003 Do Not Retry: 1 00:28:18.003 Error Location: 0x28 00:28:18.003 LBA: 0x0 00:28:18.003 Namespace: 0x0 00:28:18.003 Vendor Log Page: 0x0 00:28:18.003 00:28:18.003 Number of Queues 00:28:18.003 ================ 00:28:18.003 Number of I/O Submission Queues: 128 00:28:18.003 Number of I/O Completion Queues: 128 00:28:18.003 00:28:18.003 ZNS Specific Controller Data 00:28:18.003 ============================ 00:28:18.003 Zone Append Size Limit: 0 00:28:18.003 00:28:18.003 00:28:18.003 Active Namespaces 00:28:18.003 ================= 00:28:18.003 get_feature(0x05) failed 00:28:18.003 Namespace ID:1 00:28:18.003 Command Set Identifier: NVM (00h) 00:28:18.003 Deallocate: Supported 00:28:18.003 Deallocated/Unwritten Error: Not Supported 00:28:18.003 Deallocated Read Value: Unknown 00:28:18.003 Deallocate in Write Zeroes: Not Supported 00:28:18.003 Deallocated Guard Field: 0xFFFF 00:28:18.003 Flush: Supported 00:28:18.003 Reservation: Not Supported 00:28:18.003 Namespace Sharing Capabilities: Multiple Controllers 00:28:18.003 Size (in LBAs): 3750748848 (1788GiB) 00:28:18.003 Capacity (in LBAs): 3750748848 (1788GiB) 00:28:18.003 Utilization (in LBAs): 3750748848 (1788GiB) 00:28:18.003 UUID: 5e6313fd-ab6b-4332-9a3d-1f097d058005 00:28:18.003 Thin Provisioning: Not Supported 00:28:18.003 Per-NS Atomic Units: Yes 00:28:18.003 Atomic Write Unit (Normal): 8 00:28:18.003 Atomic Write Unit (PFail): 8 00:28:18.003 Preferred Write Granularity: 8 00:28:18.003 Atomic Compare & Write Unit: 8 00:28:18.003 Atomic Boundary Size (Normal): 0 00:28:18.003 Atomic Boundary Size (PFail): 0 00:28:18.003 Atomic Boundary Offset: 0 00:28:18.003 NGUID/EUI64 Never Reused: No 00:28:18.003 ANA group ID: 1 00:28:18.003 Namespace Write Protected: No 00:28:18.003 Number of LBA Formats: 1 00:28:18.003 Current LBA Format: LBA Format #00 00:28:18.003 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:18.003 00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:18.003 rmmod nvme_tcp 00:28:18.003 rmmod nvme_fabrics 00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr
00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save
00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore
00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:18.003 10:01:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:20.549 10:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:20.549 10:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:28:20.549 10:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:28:20.549 10:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0
00:28:20.549 10:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:28:20.549 10:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:28:20.549 10:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:28:20.549 10:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:28:20.549 10:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:28:20.549 10:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:28:20.549 10:01:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:28:23.852 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:28:23.852 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:28:23.852 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:28:23.852 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:28:23.852 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:28:23.852 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:28:23.852 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:28:23.852 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:28:23.852 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:28:23.852 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:28:23.852 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:28:23.852 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:28:23.852 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:28:23.852 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:28:23.852 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:28:23.852 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:28:23.852 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:28:24.424
00:28:24.424 real 0m19.725s
00:28:24.424 user 0m5.324s
00:28:24.424 sys 0m11.374s
00:28:24.424 10:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:24.424 10:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:28:24.424 ************************************
00:28:24.424 END TEST nvmf_identify_kernel_target
00:28:24.424 ************************************
00:28:24.424 10:01:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:28:24.424 10:01:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:24.424 10:01:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:24.424 10:01:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:24.424 ************************************
00:28:24.424 START TEST nvmf_auth_host
00:28:24.424 ************************************
00:28:24.424 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:28:24.424 * Looking for test storage...
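Condensed, the clean_kernel_target teardown traced just before the driver rebinds undoes the nvmet configfs tree child-before-parent and then unloads the kernel target modules. A sketch of the same sequence (the commands are verbatim from the trace; the file behind the bare 'echo 0' is not shown in the log, so the namespace enable attribute is my assumption):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"    # assumed target of the traced 'echo 0'
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"   # detach subsystem from port
    rmdir "$subsys/namespaces/1"              # children first: namespace ...
    rmdir "$nvmet/ports/1"                    # ... then the port ...
    rmdir "$subsys"                           # ... then the subsystem itself
    modprobe -r nvmet_tcp nvmet               # finally drop the kernel target modules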
00:28:24.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:24.424 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:24.424 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:24.424 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:24.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.687 --rc genhtml_branch_coverage=1 00:28:24.687 --rc genhtml_function_coverage=1 00:28:24.687 --rc genhtml_legend=1 00:28:24.687 --rc geninfo_all_blocks=1 00:28:24.687 --rc geninfo_unexecuted_blocks=1 00:28:24.687 00:28:24.687 ' 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:24.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.687 --rc genhtml_branch_coverage=1 00:28:24.687 --rc genhtml_function_coverage=1 00:28:24.687 --rc genhtml_legend=1 00:28:24.687 --rc geninfo_all_blocks=1 00:28:24.687 --rc geninfo_unexecuted_blocks=1 00:28:24.687 00:28:24.687 ' 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:24.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.687 --rc genhtml_branch_coverage=1 00:28:24.687 --rc genhtml_function_coverage=1 00:28:24.687 --rc genhtml_legend=1 00:28:24.687 --rc geninfo_all_blocks=1 00:28:24.687 --rc geninfo_unexecuted_blocks=1 00:28:24.687 00:28:24.687 ' 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:24.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.687 --rc genhtml_branch_coverage=1 00:28:24.687 --rc genhtml_function_coverage=1 00:28:24.687 --rc genhtml_legend=1 00:28:24.687 --rc geninfo_all_blocks=1 00:28:24.687 --rc geninfo_unexecuted_blocks=1 00:28:24.687 00:28:24.687 ' 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:24.687 10:01:55 
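The cmp_versions trace above is how scripts/common.sh decides that the installed lcov (1.15) is older than 2 before choosing coverage flags: both version strings are split on '.', '-' and ':' and compared field by field. A stripped-down sketch of that comparison (numeric fields only; the real helper also validates each field with a regex before comparing):

    lt() {   # "is $1 < $2" -> exit 0/1, mirroring the traced helper
        local -a ver1 ver2
        local i len
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((i = 0; i < len; i++)); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # first lower field decides
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 is older than 2"   # matches the traced result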
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:24.687 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.827 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:32.827 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:32.827 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:32.827 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:32.827 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:32.827 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:32.827 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:32.827 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:32.827 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:32.827 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:28:32.827 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:32.827 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:28:32.827 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:32.827 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:28:32.827 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:32.827 10:02:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:32.827 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:32.827 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:32.828 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:32.828 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.828 
10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:32.828 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:32.828 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:32.828 10:02:02 
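The block above is gather_supported_nvmf_pci_devs at work: it builds allowlists of Intel E810/X722 and Mellanox device IDs, keeps the two E810 ports it finds (0x8086:0x159b at 0000:4b:00.0 and 0000:4b:00.1), and resolves each PCI function to its kernel netdev through sysfs. The resolution step in isolation, using the same glob the trace shows:

    # Sketch: map each kept PCI function to the net devices exposed under it.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done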
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:32.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:32.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:28:32.828 00:28:32.828 --- 10.0.0.2 ping statistics --- 00:28:32.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.828 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:32.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:32.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:28:32.828 00:28:32.828 --- 10.0.0.1 ping statistics --- 00:28:32.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.828 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1528522 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1528522 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1528522 ']' 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
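The nvmf_tcp_init sequence traced above (ending with the two pings) gives target and initiator separate network stacks on one machine: the first E810 port (cvl_0_0) moves into a fresh namespace and becomes the target side at 10.0.0.2, the second port stays in the default namespace as the initiator at 10.0.0.1, and reachability is verified in both directions. Flattened out, with the iptables comment argument trimmed:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator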
00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:32.828 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4292e44c852579f75d14d55874abbfac 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Tpk 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4292e44c852579f75d14d55874abbfac 0 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4292e44c852579f75d14d55874abbfac 0 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4292e44c852579f75d14d55874abbfac 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Tpk 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Tpk 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Tpk 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:33.089 10:02:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a5711fc7d54a842a34e27de3702d85477be030453b77472659a94ecb0574e9ef 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.aJ5 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a5711fc7d54a842a34e27de3702d85477be030453b77472659a94ecb0574e9ef 3 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a5711fc7d54a842a34e27de3702d85477be030453b77472659a94ecb0574e9ef 3 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a5711fc7d54a842a34e27de3702d85477be030453b77472659a94ecb0574e9ef 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.aJ5 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.aJ5 00:28:33.089 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.aJ5 00:28:33.090 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:33.090 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:33.090 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.090 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:33.090 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:33.090 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:33.090 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2e9c0a84a993c805769040e539f568a578e765ac50dd98bf 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.fKq 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2e9c0a84a993c805769040e539f568a578e765ac50dd98bf 0 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2e9c0a84a993c805769040e539f568a578e765ac50dd98bf 0 
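Each gen_dhchap_key call traced through this stretch follows the same recipe: pull N random bytes from /dev/urandom as hex via xxd, wrap them in the DHHC-1 secret format, and store the result 0600 in a mktemp file whose suffix names the hash (null/sha256/sha384/sha512 map to digest codes 0-3). The trace hides the body of the 'python -' step; the encoding below (base64 over the key bytes plus a little-endian CRC32, per the NVMe DH-HMAC-CHAP secret representation) is my reconstruction of it:

    key=$(xxd -p -c0 -l 16 /dev/urandom)     # 16 random bytes as 32 hex chars
    file=$(mktemp -t spdk.key-null.XXX)
    # Reconstructed DHHC-1 encoding; "00" is the digest code for a null hash.
    python3 -c 'import base64,binascii,sys; raw=bytes.fromhex(sys.argv[1]); crc=binascii.crc32(raw).to_bytes(4,"little"); print("DHHC-1:00:%s:" % base64.b64encode(raw+crc).decode())' "$key" > "$file"
    chmod 0600 "$file"                       # secret material: owner-only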
00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2e9c0a84a993c805769040e539f568a578e765ac50dd98bf 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.fKq 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.fKq 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.fKq 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d0009c540ba93150cd0c185d63f97f8c878bede700d7dbf5 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.tqw 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d0009c540ba93150cd0c185d63f97f8c878bede700d7dbf5 2 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d0009c540ba93150cd0c185d63f97f8c878bede700d7dbf5 2 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d0009c540ba93150cd0c185d63f97f8c878bede700d7dbf5 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.tqw 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.tqw 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.tqw 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.351 10:02:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b74f47b9dad90d31721979647992fb34 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.H95 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b74f47b9dad90d31721979647992fb34 1 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b74f47b9dad90d31721979647992fb34 1 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b74f47b9dad90d31721979647992fb34 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.H95 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.H95 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.H95 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4900833cf1d75197c8d01e7644079b1d 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.iRe 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4900833cf1d75197c8d01e7644079b1d 1 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4900833cf1d75197c8d01e7644079b1d 1 00:28:33.351 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:33.352 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:33.352 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=4900833cf1d75197c8d01e7644079b1d 00:28:33.352 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:33.352 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.iRe 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.iRe 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.iRe 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b73a5dd3154a8f025fb676b03345e07fbd588eacfb208317 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.bGR 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b73a5dd3154a8f025fb676b03345e07fbd588eacfb208317 2 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b73a5dd3154a8f025fb676b03345e07fbd588eacfb208317 2 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b73a5dd3154a8f025fb676b03345e07fbd588eacfb208317 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.bGR 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.bGR 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.bGR 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:33.613 10:02:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3ba5c6eed89963cd95d306284d896cb9 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.H6S 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3ba5c6eed89963cd95d306284d896cb9 0 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3ba5c6eed89963cd95d306284d896cb9 0 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3ba5c6eed89963cd95d306284d896cb9 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.H6S 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.H6S 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.H6S 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:33.613 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=217cae302edf9b950d54cabc92e9261c3385ad9e7239177cbfdb749d0ceef2ce 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.WDg 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 217cae302edf9b950d54cabc92e9261c3385ad9e7239177cbfdb749d0ceef2ce 3 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 217cae302edf9b950d54cabc92e9261c3385ad9e7239177cbfdb749d0ceef2ce 3 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=217cae302edf9b950d54cabc92e9261c3385ad9e7239177cbfdb749d0ceef2ce 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.WDg 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.WDg 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.WDg 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1528522 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1528522 ']' 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.614 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Tpk 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.aJ5 ]] 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aJ5 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.fKq 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.tqw ]] 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.tqw 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.H95 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.iRe ]] 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iRe 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.bGR 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.H6S ]] 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.H6S 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.WDg 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.875 10:02:04 
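Once nvmf_tgt is up inside the target namespace, the loop traced above hands every generated secret to it over JSON-RPC; key<N> is the host secret and ckey<N> the controller counterpart for slot N. rpc_cmd in the trace wraps SPDK's scripts/rpc.py, so replayed by hand against the same socket it would look roughly like this (rpc.py path abbreviated, -s shows the default socket explicitly):

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc keyring_file_add_key key0  /tmp/spdk.key-null.Tpk     # DHCHAP key, slot 0
    $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aJ5   # controller key, slot 0
    $rpc keyring_file_add_key key1  /tmp/spdk.key-null.fKq
    $rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tqw
    # ... and so on through key4 (/tmp/spdk.key-sha512.WDg); ckey4 is empty and skipped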
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:33.875 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:34.136 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:34.136 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:34.136 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:37.437 Waiting for block devices as requested 00:28:37.437 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:37.437 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:37.698 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:37.698 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:37.698 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:37.958 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:37.958 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:37.958 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:37.958 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:38.218 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:38.218 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:38.479 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:38.479 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:38.479 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:38.479 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:38.740 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:38.740 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:39.815 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:39.815 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:39.815 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:39.816 No valid GPT data, bailing 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:39.816 10:02:10 
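
configure_kernel_target (nvmf/common.sh@660 onward) builds the kernel nvmet target through configfs: the mkdir calls above create the subsystem, namespace 1, and port 1, while the bare echo commands that follow (@693-@702) have their redirect targets hidden by xtrace. Assuming the standard nvmet configfs attribute layout, they plausibly expand to:

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    ns=$subsys/namespaces/1
    port=/sys/kernel/config/nvmet/ports/1
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # @693: model string
    echo 1            > "$subsys/attr_allow_any_host"             # @695 (re-restricted later for auth)
    echo /dev/nvme0n1 > "$ns/device_path"                         # @696: back namespace 1 with the probed disk
    echo 1            > "$ns/enable"                              # @697
    echo 10.0.0.1     > "$port/addr_traddr"                       # @699: listen address
    echo tcp          > "$port/addr_trtype"                       # @700
    echo 4420         > "$port/addr_trsvcid"                      # @701
    echo ipv4         > "$port/addr_adrfam"                       # @702
    ln -s "$subsys" "$port/subsystems/"                           # @705: expose the subsystem on the port

The nvme discover against 10.0.0.1:4420 further down confirms the result: one discovery subsystem plus nqn.2024-02.io.spdk:cnode0.
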
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:28:39.816 00:28:39.816 Discovery Log Number of Records 2, Generation counter 2 00:28:39.816 =====Discovery Log Entry 0====== 00:28:39.816 trtype: tcp 00:28:39.816 adrfam: ipv4 00:28:39.816 subtype: current discovery subsystem 00:28:39.816 treq: not specified, sq flow control disable supported 00:28:39.816 portid: 1 00:28:39.816 trsvcid: 4420 00:28:39.816 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:39.816 traddr: 10.0.0.1 00:28:39.816 eflags: none 00:28:39.816 sectype: none 00:28:39.816 =====Discovery Log Entry 1====== 00:28:39.816 trtype: tcp 00:28:39.816 adrfam: ipv4 00:28:39.816 subtype: nvme subsystem 00:28:39.816 treq: not specified, sq flow control disable supported 00:28:39.816 portid: 1 00:28:39.816 trsvcid: 4420 00:28:39.816 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:39.816 traddr: 10.0.0.1 00:28:39.816 eflags: none 00:28:39.816 sectype: none 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: ]] 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:39.816 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.817 nvme0n1 00:28:39.817 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: ]] 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
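
nvmet_auth_init (@35-@38, traced above) creates the host entry, locks the subsystem down via attr_allow_any_host=0, and links the host into allowed_hosts; nvmet_auth_set_key (@42-@51) then programs the DH-HMAC-CHAP parameters for each iteration. The echoes at @48-@51 again hide their redirect targets; assuming the kernel's per-host dhchap_* configfs attributes, the first iteration (sha256/ffdhe2048, keyid 1) plausibly amounts to:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"     # @48: HMAC digest to negotiate
    echo ffdhe2048      > "$host/dhchap_dhgroup"  # @49: FFDHE group
    # @50: host secret for keyid 1
    echo "DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==:" > "$host/dhchap_key"
    # @51: controller secret, written only when ckeyN is non-empty (bidirectional auth)
    echo "DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==:" > "$host/dhchap_ctrl_key"
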
00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.078 nvme0n1 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.078 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.340 10:02:11 
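
get_main_ns_ip (nvmf/common.sh@769-@783), traced before every attach, just resolves which environment variable holds the initiator-side address for the transport under test. A reconstruction from the trace; the transport variable's name and the ${!ip} indirection are assumptions, since xtrace only shows the expanded values:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        # @775: traced as '[[ -z tcp ]]' and '[[ -z NVMF_INITIATOR_IP ]]'
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # @776: variable *name*, e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # @778: traced as '[[ -z 10.0.0.1 ]]'
        echo "${!ip}"                          # @783: 10.0.0.1 in this run
    }
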
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: ]] 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.340 nvme0n1 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.340 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: ]] 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.602 nvme0n1 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: ]] 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.602 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.603 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.603 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.603 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.603 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.603 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.603 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.603 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:40.603 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.603 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.864 nvme0n1 00:28:40.864 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.864 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.864 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.864 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.864 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.864 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.864 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.865 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.126 nvme0n1 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.126 10:02:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: ]] 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.126 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.127 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.127 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.127 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.127 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.127 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.127 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.127 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.127 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.127 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.127 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.127 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:41.127 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.127 10:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.387 nvme0n1 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: ]] 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.387 
10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.387 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.648 nvme0n1 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: ]] 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.648 10:02:12 
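
For orientation in the long stretch of repeated iterations: the @100/@101/@102 markers are the enclosing sweep in host/auth.sh, which pairs every digest with every DH group and every key index, re-keying the kernel host entry and re-attaching/detaching the controller once per combination. A skeleton reconstructed from the traced for-loops and the @93/@94 printf output earlier:

    for digest in "${digests[@]}"; do            # host/auth.sh@100: sha256, sha384, sha512
        for dhgroup in "${dhgroups[@]}"; do      # @101: ffdhe2048 .. ffdhe8192
            for keyid in "${!keys[@]}"; do       # @102: 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: kernel target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: SPDK initiator side
            done
        done
    done
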
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.648 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.909 nvme0n1 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: ]] 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.909 10:02:12 
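
Every secret in this trace uses the DHHC-1 "configured secret" representation. As I read that format (a hedged reading of the NVMe DH-HMAC-CHAP secret layout, not something stated in this log), the second field names the transformation hash (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the third is base64 key material with a checksum appended:

    # Pick apart the keyid=3 secret seen above (value copied from the trace)
    secret='DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==:'
    IFS=: read -r tag hmac material _ <<< "$secret"
    echo "$tag / hmac=$hmac"                                    # DHHC-1 / hmac=02
    echo "decoded: $(printf %s "$material" | base64 -d | wc -c) bytes"
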
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.909 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.170 nvme0n1 00:28:42.170 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.170 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.170 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.170 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.170 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.170 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.170 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.170 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.170 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.170 10:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:42.170 10:02:13 
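
The ckey=() assignment at auth.sh@58, together with the [[ -z '' ]] just above, implements optional bidirectional authentication: ${ckeys[keyid]:+...} expands to a two-word flag pair only when a controller secret exists, and the deliberately unquoted expansion splits it into two array elements. Key id 4 has an empty ckey, so its attach runs without --dhchap-ctrlr-key, i.e. host-side (unidirectional) authentication only. A minimal sketch of the idiom; the example secret is invented:

    ckeys=([1]='DHHC-1:02:example==:' [4]='')   # key id 4: no controller key
    keyid=4
    # Expands to (--dhchap-ctrlr-key ckey4) only if ckeys[keyid] is non-empty
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]} extra attach arg(s)"      # prints: 0 extra attach arg(s)
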
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.170 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.432 nvme0n1 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: ]] 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.432 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.693 nvme0n1 00:28:42.693 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.693 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.693 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.693 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.693 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.693 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.693 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.693 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.693 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.693 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:42.954 10:02:13 
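
The target half of nvmet_auth_set_key appears in the trace as a handful of echoes (auth.sh@48-51): the digest as 'hmac(sha256)', the DH group, and the two DHHC-1 secrets. A hedged reconstruction, assuming those echoes are redirected into the kernel nvmet configfs host entry; the path and attribute names below are assumptions, only the echoed values come from this log:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest    (auth.sh@48)
    echo 'ffdhe4096'    > "$host/dhchap_dhgroup"   # DH group  (auth.sh@49)
    echo 'DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==:' \
        > "$host/dhchap_key"                       # host secret       (auth.sh@50)
    echo 'DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==:' \
        > "$host/dhchap_ctrl_key"                  # controller secret (auth.sh@51)
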
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: ]] 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.954 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.955 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.955 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.955 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.955 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.955 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.955 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.955 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:42.955 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.955 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.216 nvme0n1 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: ]] 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
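
Stepping back, auth.sh@101-103 reveal the loops generating this whole excerpt. The arrays themselves are never printed, but the observed progression (sha256 with ffdhe3072, then ffdhe4096, ffdhe6144, ffdhe8192, key ids 0-4 under each) implies roughly the following; take the array contents as inferred rather than quoted:

    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    # keys[0..4] / ckeys[0..4] hold the DHHC-1 secrets set up earlier in the test
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key   sha256 "$dhgroup" "$keyid"   # target    (auth.sh@103)
            connect_authenticate sha256 "$dhgroup" "$keyid"   # initiator (auth.sh@104)
        done
    done
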
00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.216 10:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.477 nvme0n1 00:28:43.477 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.477 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.477 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.477 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.477 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.477 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.477 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.477 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: ]] 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.478 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.739 nvme0n1 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.739 10:02:14 
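
get_main_ns_ip (nvmf/common.sh@769-783) runs before every attach. Reconstructed from its trace, it maps the transport to the name of the environment variable carrying the initiator-facing address and resolves it by indirect expansion, which is why the trace shows ip=NVMF_INITIATOR_IP followed by echo 10.0.0.1. A hedged reconstruction with the error handling simplified:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # variable name, e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion: its value
        echo "${!ip}"                          # 10.0.0.1 in this run
    }
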
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.739 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.000 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.000 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.000 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.000 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.000 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.000 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.000 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:44.000 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.000 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.000 nvme0n1 00:28:44.000 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.000 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.000 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.000 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.000 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: ]] 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.262 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.523 nvme0n1 00:28:44.523 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.523 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.523 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.523 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.523 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.523 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.523 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.523 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.523 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.523 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: ]] 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 
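
Two details of the verification step (auth.sh@64-65) are worth decoding. The bare nvme0n1 lines interleaved in the log are RPC output rather than xtrace: the namespace bdev reported once a controller authenticates and attaches. And the \n\v\m\e\0 pattern is simply bash xtrace escaping a quoted right-hand side of [[ == ]], so it compares literally instead of as a glob:

    # Confirm exactly one controller named nvme0 exists, then tear it down
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                     # traced as: [[ nvme0 == \n\v\m\e\0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0  # clean slate for the next key id
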
00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.785 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.046 nvme0n1 00:28:45.046 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.046 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.047 10:02:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: ]] 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.047 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.307 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.307 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.307 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.307 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.307 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.307 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.307 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.307 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.307 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.307 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.307 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.307 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.307 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:45.307 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.307 10:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.568 nvme0n1 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: ]] 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.568 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.140 nvme0n1 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.140 10:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.712 nvme0n1 00:28:46.712 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.712 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.712 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.712 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.712 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.712 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.712 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.712 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.712 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.712 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.712 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: ]] 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.713 10:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:47.284 nvme0n1 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: ]] 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:47.284 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:47.285 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:47.285 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:47.285 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.285 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.285 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.285 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.285 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.285 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.285 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.285 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.285 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.285 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.285 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.285 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.285 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.285 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.285 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:47.285 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.285 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.228 nvme0n1 00:28:48.228 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.228 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.228 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.228 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.228 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.228 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.228 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.228 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.228 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:48.229 
10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: ]] 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.229 10:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.804 nvme0n1 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: ]] 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.804 
10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.804 10:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.373 nvme0n1 00:28:49.373 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.373 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.373 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.373 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.373 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.633 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.633 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.633 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.633 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.633 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:49.633 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.633 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.633 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:49.633 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.633 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:49.633 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:49.633 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:49.633 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:49.633 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:49.633 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:49.633 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.634 10:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.298 nvme0n1 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: ]] 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.298 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.558 nvme0n1 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: ]] 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:50.558 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:50.559 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.559 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:50.559 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.559 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.559 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.559 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:50.559 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.559 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.559 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.559 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.559 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.559 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.559 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.559 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.559 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.559 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.559 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:50.559 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.559 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.819 nvme0n1 00:28:50.819 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.819 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.819 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.819 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.819 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.819 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.819 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.819 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.819 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.819 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.819 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:50.820 10:02:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: ]] 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.820 nvme0n1 00:28:50.820 10:02:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.820 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: ]] 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.081 nvme0n1 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.081 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.343 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.343 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.343 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:51.343 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.343 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:28:51.343 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:51.343 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:51.343 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:51.343 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:51.343 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:51.343 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:51.343 10:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.343 nvme0n1 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:51.343 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: ]] 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.344 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.605 nvme0n1 00:28:51.605 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.605 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.605 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.605 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.606 
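The trace above repeats one fixed pattern per (digest, dhgroup, keyid) combination: nvmet_auth_set_key provisions the secret on the kernel nvmet target side, then connect_authenticate exercises it from the SPDK initiator side. A minimal sketch of the provisioning step, reconstructed from the values echoed in the trace; the configfs host entry and its attribute names are assumptions here (the log only shows the echo commands), not something this trace confirms:

nvmet_auth_set_key_sketch() {
    # digest/dhgroup/key/ckey correspond to the values echoed above,
    # e.g. sha384 / ffdhe3072 / DHHC-1:00:... / DHHC-1:03:...
    local digest=$1 dhgroup=$2 key=$3 ckey=$4
    # hypothetical configfs path for the host NQN used by this test
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac($digest)" > "$host/dhchap_hash"
    echo "$dhgroup"      > "$host/dhchap_dhgroup"
    echo "$key"          > "$host/dhchap_key"
    # a controller key is written only for bidirectional auth (ckey non-empty);
    # keyid 4 in this run has ckey='' and therefore skips this step
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}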
10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: ]] 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.606 10:02:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.606 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.866 nvme0n1 00:28:51.866 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.866 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.866 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.866 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.866 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.866 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.866 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.866 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.866 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.866 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.866 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.866 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.866 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:51.866 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.866 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:51.866 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:51.866 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:51.866 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:51.866 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: ]] 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.867 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.127 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:52.127 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.127 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.127 nvme0n1 00:28:52.127 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.127 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.127 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.127 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.127 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.127 10:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: ]] 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.127 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.387 nvme0n1 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.387 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:52.388 
10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.388 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.648 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.648 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.648 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.648 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.648 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.648 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.648 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.648 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.648 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.648 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.648 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.648 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.648 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:52.648 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.648 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.648 nvme0n1 00:28:52.648 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.649 
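The secrets echoed throughout this trace follow the DH-HMAC-CHAP secret representation (the format nvme-cli's gen-dhchap-key emits): DHHC-1:<t>:<base64>:, where the two-digit field selects the hash used to transform the secret (00 no transform, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the key material followed by a 4-byte CRC-32. Those two details come from the spec/nvme-cli convention rather than from this log. A quick way to inspect one of the keys seen above:

# field 3 of the DHHC-1 string is the base64 payload: key bytes + CRC-32
key='DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=:'
echo "$key" | cut -d: -f3 | base64 -d | wc -c   # expect 68 = 64-byte key + 4-byte CRC-32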
10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: ]] 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.649 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.910 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:28:52.910 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.910 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.910 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.911 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.911 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.911 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.911 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.911 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.911 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.911 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.911 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.911 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:52.911 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.911 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.172 nvme0n1 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: ]] 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:53.172 10:02:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.172 10:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.433 nvme0n1 00:28:53.433 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.433 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.433 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.433 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.433 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.433 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.433 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.433 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.433 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.433 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: ]] 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.434 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.695 nvme0n1 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: ]] 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.695 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.955 nvme0n1 00:28:53.955 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.955 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.955 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.955 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.955 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.955 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:54.216 10:02:24 
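The get_main_ns_ip helper traced before every attach simply resolves which address the initiator should dial for the transport under test. A self-contained sketch matching the traced branches; the TEST_TRANSPORT variable name is an assumption, since the trace only shows the literal "tcp":

get_main_ns_ip_sketch() {
    local ip varname
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # bail out if the transport is unset or has no candidate variable
    [[ -z $TEST_TRANSPORT ]] && return 1
    varname=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z $varname ]] && return 1
    ip=${!varname}              # indirect expansion; 10.0.0.1 in this run
    [[ -z $ip ]] && return 1
    echo "$ip"
}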
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.216 10:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.476 nvme0n1 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: ]] 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.477 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.737 nvme0n1 00:28:54.737 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.737 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.737 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: ]] 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.997 10:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.258 nvme0n1 00:28:55.258 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.258 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.258 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.258 10:02:26 
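On the target side, the four echoes of nvmet_auth_set_key (auth.sh@48 through @51, seen just above for keyid 1) carry the digest, the DH group, the host secret and, when one exists, the controller secret. A plausible sketch of where they land when the kernel soft target backs the test; the configfs paths are an assumption, they are not visible in this log:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed layout
  echo 'hmac(sha384)' > "$host/dhchap_hash"      # @48: DH-HMAC-CHAP digest
  echo ffdhe6144      > "$host/dhchap_dhgroup"   # @49: FFDHE group under test
  echo 'DHHC-1:00:MmU5...' > "$host/dhchap_key"        # @50: host secret
  echo 'DHHC-1:02:ZDAw...' > "$host/dhchap_ctrlr_key"  # @51: bidirectional secret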
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.258 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.258 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: ]] 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.520 10:02:26 
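A note on the busiest pattern in the trace: every rpc_cmd is bracketed by xtrace_disable (common/autotest_common.sh@563) with its set +x (@10), and followed by a [[ 0 == 0 ]] check at @591. The bracket keeps the JSON-RPC plumbing out of the xtrace so only the logical command is logged; the @591 comparison is a consistency check inside the restore helper, and both sides reading 0 here simply means the saved state matched. A simplified sketch of the wrapper shape, not the SPDK source:

  rpc_cmd() {
      xtrace_disable               # mute tracing around the RPC transport
      "$rootdir/scripts/rpc.py" "$@"
      local rc=$?                  # capture the RPC's exit status
      xtrace_restore               # re-enable tracing; emits the @591 check
      return $rc
  }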
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.520 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.781 nvme0n1 00:28:55.781 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.781 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.781 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.781 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.781 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.781 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.781 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.781 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.781 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.781 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: ]] 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:56.042 10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.042 
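connect_authenticate (auth.sh@55 through @61, replayed just above for keyid 3) is the initiator half of each cell: pin the host to a single digest and DH group, resolve the target address, attach with this keyid's secrets. Every command below appears verbatim in the trace; only the function scaffolding around them is reconstructed:

  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # @58
      # @60: restrict negotiation to exactly the cell under test.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # @61: attach over TCP using the named key (and controller key, if any).
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
  }

The bare nvme0n1 lines after each attach are that RPC's stdout: the namespace bdev created on the newly attached controller.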
10:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.304 nvme0n1 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.304 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.565 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.565 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.565 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:56.565 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:56.565 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:56.565 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.565 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.565 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:56.565 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.565 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:56.565 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:56.565 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:56.565 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:56.565 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.565 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.826 nvme0n1 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.826 10:02:27 
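The keyid=4 cell that finished just above is the unidirectional case: ckeys[4] is empty, so @46 traces a bare ckey=, @51 evaluates [[ -z '' ]], and the attach runs with --dhchap-key key4 and no --dhchap-ctrlr-key. The @58 idiom that makes the flag pair optional:

  # ${var:+word} expands to word only if var is set and non-empty, so ckey
  # becomes either two option words or an empty array, and "${ckey[@]}"
  # disappears cleanly from the attach command when no controller secret exists.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})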
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: ]] 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:56.826 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.827 10:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.770 nvme0n1 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: ]] 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.770 10:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.342 nvme0n1 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: ]] 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.342 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.343 
10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.343 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.343 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.343 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.343 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.343 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:58.343 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.343 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.913 nvme0n1 00:28:58.913 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.913 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.913 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.913 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.913 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: ]] 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.174 10:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.747 nvme0n1 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.747 10:02:30 
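get_main_ns_ip (nvmf/common.sh@769 through @783, walked once more just above) chooses the target address by transport. A reconstruction that mirrors the traced statements; the TEST_TRANSPORT variable name is an assumption, everything else follows the @-markers:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates                                # @769/@770
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP            # @772
      ip_candidates["tcp"]=NVMF_INITIATOR_IP                # @773
      # @775: both the transport and its candidate variable must be known.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}                  # @776
      ip=${!ip}                 # indirection: NVMF_INITIATOR_IP -> 10.0.0.1
      [[ -z $ip ]] && return 1                              # @778
      echo "$ip"                                            # @783
  }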
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.747 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.748 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.748 10:02:30 
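A few entries below, once this final ffdhe8192 attach (keyid 4) completes, the host/auth.sh@100 header reappears: the sha384 pass has exhausted its dhgroups and the outer digest loop advances to sha512, restarting the dhgroup sweep at ffdhe2048. Restricted to what this excerpt alone shows, the matrix is:

  # Deliberately partial: earlier parts of the log cover further digests and
  # dhgroups that are not visible in this excerpt.
  digests=(sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)
  keyids=(0 1 2 3 4)   # five secrets per cell; keyid 4 lacks a controller key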
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.748 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.748 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.748 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.748 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.748 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.748 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.748 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:59.748 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.748 10:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.692 nvme0n1 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: ]] 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:00.692 nvme0n1 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: ]] 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.692 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.693 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.693 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.693 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.693 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:00.693 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.693 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.955 nvme0n1 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:00.955 
10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: ]] 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.955 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.216 nvme0n1 00:29:01.216 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.216 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.216 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.216 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.216 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.216 10:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: ]] 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.216 
10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.216 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.479 nvme0n1 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.479 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.740 nvme0n1 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: ]] 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.740 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.002 nvme0n1 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.002 
10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: ]] 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.002 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:02.003 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.003 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.003 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.003 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.003 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:02.003 10:02:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:02.003 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:02.003 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.003 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.003 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:02.003 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.003 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:02.003 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:02.003 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:02.003 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:02.003 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.003 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.264 nvme0n1 00:29:02.264 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.264 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.264 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.264 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.264 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.264 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.264 10:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.264 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.264 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.264 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.264 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.264 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.264 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:02.264 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.264 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:02.264 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:02.264 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:02.264 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:29:02.265 10:02:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: ]] 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.265 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.526 nvme0n1 00:29:02.526 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.526 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.526 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.526 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.526 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.526 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.526 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.526 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.526 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.526 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.526 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.526 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.526 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:02.526 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.526 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:02.526 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:02.526 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:02.526 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:29:02.526 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: ]] 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.527 10:02:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.527 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.788 nvme0n1 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:02.788 
10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.788 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
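The ffdhe2048 and ffdhe3072 passes above each run the same connect/verify/disconnect cycle once per keyid, and the ffdhe4096 pass below repeats it again. Reduced to its essentials, one iteration of that cycle looks like the following sketch. It uses SPDK's scripts/rpc.py client directly rather than the autotest rpc_cmd wrapper, and it assumes (as the setup earlier in this log did) that host keys named key1/ckey1 have already been registered; the address, NQNs, and DH-HMAC-CHAP flags are taken verbatim from the commands traced above.

# Pin the host to a single digest and DH group so the negotiated handshake is the one under test
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
# Connect with bidirectional authentication: key1 proves the host, ckey1 verifies the controller
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# The attach only completes if authentication passed, so listing controllers verifies it
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
# Tear down before the next (digest, dhgroup, keyid) combination
scripts/rpc.py bdev_nvme_detach_controller nvme0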
00:29:03.049 nvme0n1 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: ]] 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:03.049 10:02:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.049 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:03.050 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:03.050 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:03.050 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.050 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.050 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:03.050 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.050 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:03.050 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:03.050 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:03.050 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:03.050 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.050 10:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.311 nvme0n1 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.311 10:02:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: ]] 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:03.311 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.312 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:03.312 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.312 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.312 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.312 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.312 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:03.312 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:03.312 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:03.312 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.312 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.312 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:03.312 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.312 10:02:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:03.312 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:03.312 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:03.312 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:03.312 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.312 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.573 nvme0n1 00:29:03.573 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.573 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.573 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.573 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.573 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.573 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.573 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.573 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.573 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.573 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: ]] 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.834 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.095 nvme0n1 00:29:04.095 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.095 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.095 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.095 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.095 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.095 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.095 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.095 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:04.095 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.095 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.095 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.095 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.095 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:04.095 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: ]] 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.096 10:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.356 nvme0n1 00:29:04.356 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.356 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.356 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.356 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.356 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.356 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.356 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.356 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.356 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.356 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.356 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.356 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.356 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.357 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.617 nvme0n1 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: ]] 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.617 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:04.618 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:04.618 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:04.618 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.618 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:04.618 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.618 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.618 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.878 10:02:35 
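
The ffdhe6144 pass opening above re-keys the kernel nvmet target before the next round of reconnects. xtrace never prints redirections, so the echo lines at host/auth.sh@48-51 show only the data being written, not where it goes; the sketch below reconstructs that helper under the assumption that the writes land in the standard Linux nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) -- the paths are inferred, not traced.

  # Target-side half of one iteration (host/auth.sh@42-51), a sketch.
  # Assumption: the echoes are redirected into the nvmet configfs host
  # entry; xtrace hides the redirections, so these paths are inferred.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

      echo "hmac(${digest})" > "${host}/dhchap_hash"      # auth.sh@48
      echo "${dhgroup}" > "${host}/dhchap_dhgroup"        # auth.sh@49
      echo "${keys[keyid]}" > "${host}/dhchap_key"        # auth.sh@50
      # The controller (bidirectional) key is optional; see auth.sh@51.
      [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "${host}/dhchap_ctrl_key"
  }
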
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.878 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.878 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.878 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.878 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.878 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.878 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.878 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.878 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.878 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.878 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.878 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:04.878 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.878 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.139 nvme0n1 00:29:05.139 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.139 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.139 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.139 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.139 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.139 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.139 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.139 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.139 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.139 10:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: ]] 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:05.139 10:02:36 
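
Every digest/dhgroup/keyid triple in this log then exercises the same initiator-side helper, traced at host/auth.sh@55-65. Rebuilt from the xtrace output, one iteration reduces to the sketch below (rpc_cmd and get_main_ns_ip are the suite's own wrappers, used exactly as traced; this is a reconstruction, not the verbatim script):

  # One connect_authenticate iteration (host/auth.sh@55-65), rebuilt
  # from the trace above.
  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # Expands to zero arguments when no controller key exists (auth.sh@58).
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

      # Pin the SPDK initiator to exactly one digest and one DH group,
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # then attach using the pre-registered key names keyN/ckeyN.
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"

      # DH-HMAC-CHAP succeeded iff the controller actually materialized.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }
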
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.139 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.711 nvme0n1 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: ]] 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.711 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.284 nvme0n1 00:29:06.284 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.284 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.284 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.284 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.284 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.284 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.284 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.284 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.284 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.284 10:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: ]] 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.284 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.544 nvme0n1 00:29:06.544 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.544 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.544 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.544 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.544 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.544 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:06.804 10:02:37 
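
Note what changes on the keyid=4 iterations (such as the one just above, where auth.sh@46 sets ckey to the empty string): key 4 ships without a controller key, so the [[ -z '' ]] guard at auth.sh@51 skips the ctrl-key write and the expansion at auth.sh@58 produces an empty array, turning the attach into host-only (unidirectional) authentication. The ${parameter:+word} idiom doing that work, with illustrative values:

  # How auth.sh@58 drops the flag pair when ckeys[keyid] is empty;
  # the sample key string here is a placeholder, not a real secret.
  ckeys=([2]="DHHC-1:01:placeholder:" [4]="")
  for keyid in 2 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=${keyid}: ${#ckey[@]} extra args"   # 2 for keyid=2, 0 for keyid=4
  done
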
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.804 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.064 nvme0n1 00:29:07.064 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.064 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.064 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.064 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.064 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.064 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.064 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.064 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5MmU0NGM4NTI1NzlmNzVkMTRkNTU4NzRhYmJmYWNScNWe: 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: ]] 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU3MTFmYzdkNTRhODQyYTM0ZTI3ZGUzNzAyZDg1NDc3YmUwMzA0NTNiNzc0NzI2NTlhOTRlY2IwNTc0ZTllZkJmO88=: 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.324 10:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.325 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.325 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.325 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.325 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.325 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.325 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.325 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.325 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.325 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.325 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.325 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.325 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.325 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:07.325 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.325 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.896 nvme0n1 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: ]] 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.896 10:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.838 nvme0n1 00:29:08.838 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.838 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.838 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.838 10:02:39 
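
Each attach in this run is preceded by the same get_main_ns_ip expansion (nvmf/common.sh@769-783): the transport name selects which environment variable holds a usable address and bash indirection dereferences it, which is why every lookup above resolves to 10.0.0.1 on this tcp run. A sketch matching the traced behavior (TEST_TRANSPORT and the NVMF_* variables come from the surrounding test environment; the failure paths are inferred, since the trace only shows the success path):

  # nvmf/common.sh@769-783 as it behaves in this trace; a sketch.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP     # common.sh@772
      ip_candidates["tcp"]=NVMF_INITIATOR_IP         # common.sh@773

      # Traced as [[ -z tcp ]] and [[ -z NVMF_INITIATOR_IP ]] (common.sh@775).
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}           # common.sh@776
      [[ -z ${!ip} ]] && return 1                    # traced as [[ -z 10.0.0.1 ]]
      echo "${!ip}"                                  # common.sh@783 -> 10.0.0.1
  }
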
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.838 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.838 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.838 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.838 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.838 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.838 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.838 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.838 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.838 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:08.838 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.838 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:08.838 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:08.838 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:08.838 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: ]] 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.839 10:02:39 
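
All of the secrets cycled through this run share the DHHC-1 representation, DHHC-1:TT:<base64>:, where TT names the hash used to transform the configured secret (00 = used as is, 01/02/03 = SHA-256/384/512) and the base64 payload is the secret followed by a CRC-32 check value. Keys of this shape can be generated with nvme-cli's gen-dhchap-key subcommand; the flags below match recent nvme-cli, but treat the exact spelling as an assumption if your version differs:

  # Produce a DHHC-1:03: secret like keyid=4 in this run
  # (SHA-512 transform, 64-byte secret before base64 + CRC).
  nvme gen-dhchap-key --hmac=3 --key-length=64 --nqn=nqn.2024-02.io.spdk:host0
  # => DHHC-1:03:<base64(secret || crc32)>:
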
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.839 10:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.410 nvme0n1 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjczYTVkZDMxNTRhOGYwMjVmYjY3NmIwMzM0NWUwN2ZiZDU4OGVhY2ZiMjA4MzE3NMEhBw==: 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: ]] 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2JhNWM2ZWVkODk5NjNjZDk1ZDMwNjI4NGQ4OTZjYjkuIBlV: 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:09.410 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.410 
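[annotation] On the host side the round is driven purely over JSON-RPC: bdev_nvme_set_options pins the negotiable digest and dhgroup to the single combination under test, then bdev_nvme_attach_controller presents key3 and ckey3 (names of keys registered with SPDK earlier in the suite, outside this excerpt). The same pair of calls issued standalone, with the flags taken verbatim from the trace and the rpc.py path assumed:

    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

If the handshake succeeds the controller shows up as nvme0 in bdev_nvme_get_controllers, which is exactly what the @64 check asserts after each round.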
10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.980 nvme0n1 00:29:09.980 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.980 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.980 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.980 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.980 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.980 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.980 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.980 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.980 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.980 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE3Y2FlMzAyZWRmOWI5NTBkNTRjYWJjOTJlOTI2MWMzMzg1YWQ5ZTcyMzkxNzdjYmZkYjc0OWQwY2VlZjJjZWdNROU=: 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.240 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.241 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.241 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.241 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.241 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.241 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.241 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.241 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.241 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.241 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.241 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.241 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.241 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.241 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:10.241 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.241 10:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.813 nvme0n1 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
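[annotation] Keyid 4 is the unidirectional case: ckeys[4] is empty, so the @61 attach above carries only --dhchap-key key4 and no --dhchap-ctrlr-key. The mechanism is the ${parameter:+word} expansion at host/auth.sh@58, which yields nothing when the parameter is unset or empty; distilled:

    # expands to the flag pair only when a controller key was generated
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller ... --dhchap-key "key${keyid}" "${ckey[@]}"

With ckeys[4]='' the ckey array stays empty and "${ckey[@]}" expands to nothing, so the flag disappears from the command line instead of being passed with an empty value.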
-- # keyid=1 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: ]] 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.813 request: 00:29:10.813 { 00:29:10.813 "name": "nvme0", 00:29:10.813 "trtype": "tcp", 00:29:10.813 "traddr": "10.0.0.1", 00:29:10.813 "adrfam": "ipv4", 00:29:10.813 "trsvcid": "4420", 00:29:10.813 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:10.813 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:10.813 "prchk_reftag": false, 00:29:10.813 "prchk_guard": false, 00:29:10.813 "hdgst": false, 00:29:10.813 "ddgst": false, 00:29:10.813 "allow_unrecognized_csi": false, 00:29:10.813 "method": "bdev_nvme_attach_controller", 00:29:10.813 "req_id": 1 00:29:10.813 } 00:29:10.813 Got JSON-RPC error response 00:29:10.813 response: 00:29:10.813 { 00:29:10.813 "code": -5, 00:29:10.813 "message": "Input/output error" 00:29:10.813 } 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.813 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
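[annotation] The negative cases run through the NOT wrapper traced at common/autotest_common.sh@652-679: it executes the RPC, captures the exit status into es, and succeeds only when the wrapped command failed, while still distinguishing a crash (es > 128, i.e. death by signal) from an ordinary error. A simplified sketch of the pattern, not the full helper:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return $es   # signal/crash: propagate, don't mask
        (( es != 0 ))                  # succeed only if the command failed
    }
    NOT rpc_cmd bdev_nvme_attach_controller ...   # expects a rejection

Here the attach without any --dhchap-key is refused by the authenticating target; rpc_cmd exits non-zero on the JSON-RPC code -5 (Input/output error) response above, so NOT returns success and the suite continues.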
00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.074 request: 00:29:11.074 { 00:29:11.074 "name": "nvme0", 00:29:11.074 "trtype": "tcp", 00:29:11.074 "traddr": "10.0.0.1", 00:29:11.074 "adrfam": "ipv4", 00:29:11.074 "trsvcid": "4420", 00:29:11.074 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:11.074 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:11.074 "prchk_reftag": false, 00:29:11.074 "prchk_guard": false, 00:29:11.074 "hdgst": false, 00:29:11.074 "ddgst": false, 00:29:11.074 "dhchap_key": "key2", 00:29:11.074 "allow_unrecognized_csi": false, 00:29:11.074 "method": "bdev_nvme_attach_controller", 00:29:11.074 "req_id": 1 00:29:11.074 } 00:29:11.074 Got JSON-RPC error response 00:29:11.074 response: 00:29:11.074 { 00:29:11.074 "code": -5, 00:29:11.074 "message": "Input/output error" 00:29:11.074 } 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
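[annotation] After each rejected attach the suite verifies that the failed handshake did not leak a half-initialized controller (host/auth.sh@114 and @120): bdev_nvme_get_controllers must return an empty list. The assertion distilled:

    (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))

jq length counts the entries of the returned JSON array, so 0 confirms a clean failure.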
00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:11.074 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.075 request: 00:29:11.075 { 00:29:11.075 "name": "nvme0", 00:29:11.075 "trtype": "tcp", 00:29:11.075 "traddr": "10.0.0.1", 00:29:11.075 "adrfam": "ipv4", 00:29:11.075 "trsvcid": "4420", 00:29:11.075 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:11.075 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:11.075 "prchk_reftag": false, 00:29:11.075 "prchk_guard": false, 00:29:11.075 "hdgst": false, 00:29:11.075 "ddgst": false, 00:29:11.075 "dhchap_key": "key1", 00:29:11.075 "dhchap_ctrlr_key": "ckey2", 00:29:11.075 "allow_unrecognized_csi": false, 00:29:11.075 "method": "bdev_nvme_attach_controller", 00:29:11.075 "req_id": 1 00:29:11.075 } 00:29:11.075 Got JSON-RPC error response 00:29:11.075 response: 00:29:11.075 { 00:29:11.075 "code": -5, 00:29:11.075 "message": "Input/output 
error" 00:29:11.075 } 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.075 10:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.336 nvme0n1 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: ]] 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.336 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.598 request: 00:29:11.598 { 00:29:11.598 "name": "nvme0", 00:29:11.598 "dhchap_key": "key1", 00:29:11.598 "dhchap_ctrlr_key": "ckey2", 00:29:11.598 "method": "bdev_nvme_set_keys", 00:29:11.598 "req_id": 1 00:29:11.598 } 00:29:11.598 Got JSON-RPC error response 00:29:11.598 response: 00:29:11.598 { 00:29:11.598 "code": -13, 00:29:11.598 "message": "Permission denied" 00:29:11.598 } 00:29:11.598 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:11.598 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:11.598 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:11.598 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:11.598 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:29:11.598 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.598 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:11.598 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.598 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.598 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.598 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:11.598 10:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:12.539 10:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.539 10:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:12.539 10:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.539 10:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.539 10:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.539 10:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:12.539 10:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:13.482 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.482 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:13.482 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.482 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmU5YzBhODRhOTkzYzgwNTc2OTA0MGU1MzlmNTY4YTU3OGU3NjVhYzUwZGQ5OGJmYY7Ohw==: 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: ]] 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZDAwMDljNTQwYmE5MzE1MGNkMGMxODVkNjNmOTdmOGM4NzhiZWRlNzAwZDdkYmY1Nr4tDA==: 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.743 nvme0n1 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:13.743 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc0ZjQ3YjlkYWQ5MGQzMTcyMTk3OTY0Nzk5MmZiMzR4DGHh: 00:29:13.744 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: ]] 00:29:13.744 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDkwMDgzM2NmMWQ3NTE5N2M4ZDAxZTc2NDQwNzliMWTrPdmE: 00:29:13.744 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:13.744 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:29:13.744 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:13.744 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:13.744 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.744 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:13.744 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.744 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:13.744 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.744 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.004 request: 00:29:14.004 { 00:29:14.004 "name": "nvme0", 00:29:14.004 "dhchap_key": "key2", 00:29:14.004 "dhchap_ctrlr_key": "ckey1", 00:29:14.004 "method": "bdev_nvme_set_keys", 00:29:14.004 "req_id": 1 00:29:14.004 } 00:29:14.004 Got JSON-RPC error response 00:29:14.004 response: 00:29:14.004 { 00:29:14.004 "code": -13, 00:29:14.004 "message": "Permission denied" 00:29:14.004 } 00:29:14.004 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:14.004 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:14.004 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:14.004 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:14.004 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:14.004 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.004 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:14.004 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.004 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.004 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.004 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:29:14.004 10:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:29:14.947 10:02:45 
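[annotation] The repeated jq-length checks with sleep 1s between them (host/auth.sh@137-138 and @148-149 above) are a wait loop: the suite polls bdev_nvme_get_controllers once a second until the controller count drops to zero, and the short ctrlr-loss timeout from the @128 attach guarantees that happens quickly once re-authentication stops succeeding. Distilled:

    # wait for bdev_nvme to drop the controller that can no longer authenticate
    while (( $(rpc_cmd bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1s
    done

Only once the count reaches zero is the trap cleared and cleanup allowed to begin.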
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:14.947 rmmod nvme_tcp 00:29:14.947 rmmod nvme_fabrics 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1528522 ']' 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1528522 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1528522 ']' 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1528522 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:14.947 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1528522 00:29:15.208 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:15.208 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:15.208 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1528522' 00:29:15.208 killing process with pid 1528522 00:29:15.208 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1528522 00:29:15.208 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1528522 00:29:15.208 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:15.208 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:15.208 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:15.208 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:29:15.208 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:29:15.208 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:15.208 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:29:15.208 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:15.208 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:15.208 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.208 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:29:15.208 10:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.776 10:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:17.776 10:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:17.776 10:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:17.776 10:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:17.776 10:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:17.776 10:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:29:17.776 10:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:17.777 10:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:17.777 10:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:17.777 10:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:17.777 10:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:17.777 10:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:17.777 10:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:21.166 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:21.166 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:21.166 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:21.166 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:21.166 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:21.166 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:21.166 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:21.166 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:21.166 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:21.166 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:21.166 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:21.166 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:21.166 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:21.166 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:21.166 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:21.166 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:21.166 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:21.428 10:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Tpk /tmp/spdk.key-null.fKq /tmp/spdk.key-sha256.H95 /tmp/spdk.key-sha384.bGR /tmp/spdk.key-sha512.WDg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:21.428 10:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:24.733 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:24.733 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:24.733 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:29:24.733 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:24.733 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:24.733 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:24.733 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:24.733 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:24.733 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:24.733 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:24.733 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:24.733 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:24.733 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:24.733 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:24.733 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:24.733 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:24.733 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:25.305 00:29:25.305 real 1m0.826s 00:29:25.305 user 0m54.666s 00:29:25.305 sys 0m16.057s 00:29:25.305 10:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.305 10:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.305 ************************************ 00:29:25.305 END TEST nvmf_auth_host 00:29:25.305 ************************************ 00:29:25.305 10:02:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:29:25.305 10:02:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:25.305 10:02:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:25.305 10:02:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:25.305 10:02:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.305 ************************************ 00:29:25.305 START TEST nvmf_digest 00:29:25.305 ************************************ 00:29:25.305 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:25.305 * Looking for test storage... 
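[annotation] nvmf_digest is dispatched through the same run_test helper as every suite in this log: the '[' 3 -le 1 ']' check above is its argument-count guard, the starred banners bracket the child script, and the real/user/sys lines come from timing it. A simplified sketch of the helper (the actual one in autotest_common.sh does additional bookkeeping):

    run_test() {
        local test_name=$1; shift
        (( $# == 0 )) && return 1   # guard: need a command to execute
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

Run standalone, the equivalent invocation here would be: run_test nvmf_digest test/nvmf/host/digest.sh --transport=tcp.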
00:29:25.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:25.305 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:25.305 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:29:25.305 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:25.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.567 --rc genhtml_branch_coverage=1 00:29:25.567 --rc genhtml_function_coverage=1 00:29:25.567 --rc genhtml_legend=1 00:29:25.567 --rc geninfo_all_blocks=1 00:29:25.567 --rc geninfo_unexecuted_blocks=1 00:29:25.567 00:29:25.567 ' 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:25.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.567 --rc genhtml_branch_coverage=1 00:29:25.567 --rc genhtml_function_coverage=1 00:29:25.567 --rc genhtml_legend=1 00:29:25.567 --rc geninfo_all_blocks=1 00:29:25.567 --rc geninfo_unexecuted_blocks=1 00:29:25.567 00:29:25.567 ' 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:25.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.567 --rc genhtml_branch_coverage=1 00:29:25.567 --rc genhtml_function_coverage=1 00:29:25.567 --rc genhtml_legend=1 00:29:25.567 --rc geninfo_all_blocks=1 00:29:25.567 --rc geninfo_unexecuted_blocks=1 00:29:25.567 00:29:25.567 ' 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:25.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.567 --rc genhtml_branch_coverage=1 00:29:25.567 --rc genhtml_function_coverage=1 00:29:25.567 --rc genhtml_legend=1 00:29:25.567 --rc geninfo_all_blocks=1 00:29:25.567 --rc geninfo_unexecuted_blocks=1 00:29:25.567 00:29:25.567 ' 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.567 
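[annotation] The scripts/common.sh trace above (lt 1.15 2 via cmp_versions) is a component-wise version comparison used to pick lcov-version-appropriate flags: both strings are split on '.', '-' and ':' and compared field by field, so 1.15 sorts below 2 on the first field and the pre-2.0 --rc lcov_branch_coverage option set is exported. The logic distilled into a standalone helper (ver_lt is a hypothetical name; the real helpers live in scripts/common.sh):

    ver_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    ver_lt 1.15 2   # succeeds: decided by 1 < 2 on the first field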
10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:25.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:25.567 10:02:56 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:29:25.567 10:02:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.713 
10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:33.713 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:33.713 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:33.713 Found net devices under 0000:4b:00.0: cvl_0_0 
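The discovery loop traced above maps each supported PCI function to its kernel net interface through sysfs ("/sys/bus/pci/devices/$pci/net"). A minimal standalone sketch of the same lookup, using the vendor:device pair 0x8086:0x159b from the "Found 0000:4b:00.x" lines; the explicit vendor/device file reads are an assumption of this sketch, not the harness's exact code:

    vendor=0x8086 device=0x159b                        # the ice pair reported above
    for pci in /sys/bus/pci/devices/*; do
        [[ $(< "$pci/vendor") == "$vendor" && $(< "$pci/device") == "$device" ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue                  # function with no netdev bound
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done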
00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:33.713 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:33.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:29:33.713 00:29:33.713 --- 10.0.0.2 ping statistics --- 00:29:33.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.713 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:33.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:29:33.713 00:29:33.713 --- 10.0.0.1 ping statistics --- 00:29:33.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.713 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:33.713 ************************************ 00:29:33.713 START TEST nvmf_digest_clean 00:29:33.713 ************************************ 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1545627 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1545627 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1545627 ']' 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.713 10:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:33.713 [2024-11-20 10:03:04.027498] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:29:33.713 [2024-11-20 10:03:04.027595] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.713 [2024-11-20 10:03:04.129154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.713 [2024-11-20 10:03:04.181557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.713 [2024-11-20 10:03:04.181611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.713 [2024-11-20 10:03:04.181620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.713 [2024-11-20 10:03:04.181628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.714 [2024-11-20 10:03:04.181634] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
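The network bring-up traced a little earlier wires the two ports of one physical NIC back-to-back on a single host: cvl_0_0 is moved into a namespace to act as the target side, cvl_0_1 stays in the root namespace as the initiator, an iptables ACCEPT opens TCP/4420, and a ping in each direction proves the path. A condensed recap of those commands (needs root; stale addresses are flushed first, and the harness issues the iptables rule through its ipts wrapper, which also tags the rule for later cleanup):

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one NIC port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator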
00:29:33.714 [2024-11-20 10:03:04.182448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.976 10:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.976 10:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:33.976 10:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:33.976 10:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:33.976 10:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:33.976 10:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:33.976 10:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:33.976 10:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:33.976 10:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:33.976 10:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.976 10:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:34.237 null0 00:29:34.237 [2024-11-20 10:03:04.981063] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.237 [2024-11-20 10:03:05.005376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.237 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.237 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:34.237 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:34.237 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:34.237 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:34.237 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:34.237 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:34.237 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:34.237 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1545767 00:29:34.237 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1545767 /var/tmp/bperf.sock 00:29:34.237 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1545767 ']' 00:29:34.237 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:34.237 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:34.237 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.237 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:34.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:34.237 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.237 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:34.237 [2024-11-20 10:03:05.065725] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:29:34.237 [2024-11-20 10:03:05.065792] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545767 ] 00:29:34.498 [2024-11-20 10:03:05.157063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.498 [2024-11-20 10:03:05.209236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.079 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.079 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:35.079 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:35.080 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:35.080 10:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:35.340 10:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:35.340 10:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:35.600 nvme0n1 00:29:35.600 10:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:35.600 10:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:35.862 Running I/O for 2 seconds... 
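Each benchmark pass follows the same RPC sequence against the bdevperf app on /var/tmp/bperf.sock, exactly as just traced: start it suspended, finish framework init, attach the target with data digest enabled, then kick the I/O from bdevperf.py. A condensed sketch, assuming $SPDK points at the repo root (the sleep is a crude stand-in for the harness's waitforlisten loop on the socket):

    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    sleep 1                                            # stand-in for waitforlisten
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests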
00:29:37.747 18874.00 IOPS, 73.73 MiB/s [2024-11-20T09:03:08.663Z] 19819.00 IOPS, 77.42 MiB/s 00:29:37.747 Latency(us) 00:29:37.747 [2024-11-20T09:03:08.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.747 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:37.747 nvme0n1 : 2.00 19853.15 77.55 0.00 0.00 6440.87 2921.81 21626.88 00:29:37.747 [2024-11-20T09:03:08.663Z] =================================================================================================================== 00:29:37.747 [2024-11-20T09:03:08.663Z] Total : 19853.15 77.55 0.00 0.00 6440.87 2921.81 21626.88 00:29:37.747 { 00:29:37.747 "results": [ 00:29:37.747 { 00:29:37.747 "job": "nvme0n1", 00:29:37.747 "core_mask": "0x2", 00:29:37.747 "workload": "randread", 00:29:37.747 "status": "finished", 00:29:37.747 "queue_depth": 128, 00:29:37.747 "io_size": 4096, 00:29:37.747 "runtime": 2.003007, 00:29:37.747 "iops": 19853.150787790557, 00:29:37.747 "mibps": 77.55137026480686, 00:29:37.747 "io_failed": 0, 00:29:37.747 "io_timeout": 0, 00:29:37.747 "avg_latency_us": 6440.871074452213, 00:29:37.747 "min_latency_us": 2921.8133333333335, 00:29:37.747 "max_latency_us": 21626.88 00:29:37.747 } 00:29:37.747 ], 00:29:37.747 "core_count": 1 00:29:37.747 } 00:29:37.747 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:37.747 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:37.747 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:37.747 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:37.747 | select(.opcode=="crc32c") 00:29:37.747 | "\(.module_name) \(.executed)"' 00:29:37.747 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:38.008 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:38.008 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:38.008 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:38.008 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:38.008 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1545767 00:29:38.008 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1545767 ']' 00:29:38.008 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1545767 00:29:38.008 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:38.008 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:38.008 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1545767 00:29:38.008 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:38.008 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 
= sudo ']' 00:29:38.008 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1545767' 00:29:38.008 killing process with pid 1545767 00:29:38.008 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1545767 00:29:38.008 Received shutdown signal, test time was about 2.000000 seconds 00:29:38.008 00:29:38.008 Latency(us) 00:29:38.008 [2024-11-20T09:03:08.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:38.008 [2024-11-20T09:03:08.924Z] =================================================================================================================== 00:29:38.008 [2024-11-20T09:03:08.924Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:38.008 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1545767 00:29:38.269 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:38.269 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:38.269 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:38.269 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:38.269 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:38.269 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:38.269 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:38.269 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1546592 00:29:38.269 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1546592 /var/tmp/bperf.sock 00:29:38.269 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1546592 ']' 00:29:38.269 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:38.269 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:38.269 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:38.269 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:38.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:38.269 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.269 10:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:38.269 [2024-11-20 10:03:08.992260] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:29:38.269 [2024-11-20 10:03:08.992320] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1546592 ] 00:29:38.269 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:38.269 Zero copy mechanism will not be used. 00:29:38.269 [2024-11-20 10:03:09.076275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.269 [2024-11-20 10:03:09.105889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.208 10:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:39.208 10:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:39.208 10:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:39.208 10:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:39.208 10:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:39.208 10:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:39.208 10:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:39.468 nvme0n1 00:29:39.468 10:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:39.468 10:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:39.468 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:39.468 Zero copy mechanism will not be used. 00:29:39.468 Running I/O for 2 seconds... 
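The MiB/s column in these result tables is pure arithmetic over IOPS and the configured I/O size, so the figures are easy to sanity-check: for the 4 KiB pass above, 19853.15 IOPS x 4096 B / 2^20 = 77.55 MiB/s, exactly the reported mibps, and for the 128 KiB passes the factor reduces to IOPS / 8. A one-liner to reproduce the first figure:

    awk 'BEGIN { printf "%.2f MiB/s\n", 19853.15 * 4096 / 1048576 }'   # -> 77.55 MiB/s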
00:29:41.791 3360.00 IOPS, 420.00 MiB/s [2024-11-20T09:03:12.707Z] 3705.50 IOPS, 463.19 MiB/s 00:29:41.791 Latency(us) 00:29:41.791 [2024-11-20T09:03:12.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.791 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:41.791 nvme0n1 : 2.00 3704.24 463.03 0.00 0.00 4316.80 460.80 14417.92 00:29:41.791 [2024-11-20T09:03:12.707Z] =================================================================================================================== 00:29:41.791 [2024-11-20T09:03:12.707Z] Total : 3704.24 463.03 0.00 0.00 4316.80 460.80 14417.92 00:29:41.791 { 00:29:41.791 "results": [ 00:29:41.791 { 00:29:41.791 "job": "nvme0n1", 00:29:41.791 "core_mask": "0x2", 00:29:41.791 "workload": "randread", 00:29:41.791 "status": "finished", 00:29:41.791 "queue_depth": 16, 00:29:41.791 "io_size": 131072, 00:29:41.791 "runtime": 2.004997, 00:29:41.791 "iops": 3704.2449440073974, 00:29:41.791 "mibps": 463.0306180009247, 00:29:41.791 "io_failed": 0, 00:29:41.791 "io_timeout": 0, 00:29:41.791 "avg_latency_us": 4316.802728782371, 00:29:41.791 "min_latency_us": 460.8, 00:29:41.791 "max_latency_us": 14417.92 00:29:41.791 } 00:29:41.791 ], 00:29:41.791 "core_count": 1 00:29:41.791 } 00:29:41.791 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:41.791 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:41.791 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:41.791 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:41.791 | select(.opcode=="crc32c") 00:29:41.791 | "\(.module_name) \(.executed)"' 00:29:41.791 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:41.791 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:41.791 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:41.791 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:41.792 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:41.792 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1546592 00:29:41.792 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1546592 ']' 00:29:41.792 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1546592 00:29:41.792 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:41.792 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:41.792 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1546592 00:29:41.792 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:41.792 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:29:41.792 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1546592' 00:29:41.792 killing process with pid 1546592 00:29:41.792 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1546592 00:29:41.792 Received shutdown signal, test time was about 2.000000 seconds 00:29:41.792 00:29:41.792 Latency(us) 00:29:41.792 [2024-11-20T09:03:12.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.792 [2024-11-20T09:03:12.708Z] =================================================================================================================== 00:29:41.792 [2024-11-20T09:03:12.708Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:41.792 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1546592 00:29:42.052 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:42.052 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:42.052 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:42.052 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:42.052 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:42.052 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:42.052 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:42.052 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1547720 00:29:42.052 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1547720 /var/tmp/bperf.sock 00:29:42.052 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1547720 ']' 00:29:42.052 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:42.052 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:42.052 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.052 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:42.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:42.052 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.052 10:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:42.052 [2024-11-20 10:03:12.777852] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:29:42.052 [2024-11-20 10:03:12.777912] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1547720 ] 00:29:42.052 [2024-11-20 10:03:12.859111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.052 [2024-11-20 10:03:12.888438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.991 10:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:42.991 10:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:42.991 10:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:42.991 10:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:42.991 10:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:42.991 10:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:42.991 10:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:43.561 nvme0n1 00:29:43.561 10:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:43.561 10:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:43.561 Running I/O for 2 seconds... 
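Taken together, the digest-clean test sweeps a small matrix: {randread, randwrite} x {4 KiB at qd 128, 128 KiB at qd 16}, always with DSA scanning off. The harness issues these as four separate run_bperf calls (host/digest.sh@128 through @131 in the trace); the loop below is only a compact restatement of that sequence, not the harness's own code:

    for spec in "randread 4096 128" "randread 131072 16" \
                "randwrite 4096 128" "randwrite 131072 16"; do
        set -- $spec                    # deliberate word-split: rw, io size, queue depth
        run_bperf "$1" "$2" "$3" false  # last arg is scan_dsa, off for this job
    done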
00:29:45.440 30245.00 IOPS, 118.14 MiB/s [2024-11-20T09:03:16.356Z] 30434.00 IOPS, 118.88 MiB/s 00:29:45.440 Latency(us) 00:29:45.440 [2024-11-20T09:03:16.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.440 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:45.440 nvme0n1 : 2.01 30451.03 118.95 0.00 0.00 4197.77 2075.31 15073.28 00:29:45.440 [2024-11-20T09:03:16.357Z] =================================================================================================================== 00:29:45.441 [2024-11-20T09:03:16.357Z] Total : 30451.03 118.95 0.00 0.00 4197.77 2075.31 15073.28 00:29:45.441 { 00:29:45.441 "results": [ 00:29:45.441 { 00:29:45.441 "job": "nvme0n1", 00:29:45.441 "core_mask": "0x2", 00:29:45.441 "workload": "randwrite", 00:29:45.441 "status": "finished", 00:29:45.441 "queue_depth": 128, 00:29:45.441 "io_size": 4096, 00:29:45.441 "runtime": 2.005154, 00:29:45.441 "iops": 30451.027701612944, 00:29:45.441 "mibps": 118.94932695942556, 00:29:45.441 "io_failed": 0, 00:29:45.441 "io_timeout": 0, 00:29:45.441 "avg_latency_us": 4197.765286253187, 00:29:45.441 "min_latency_us": 2075.306666666667, 00:29:45.441 "max_latency_us": 15073.28 00:29:45.441 } 00:29:45.441 ], 00:29:45.441 "core_count": 1 00:29:45.441 } 00:29:45.441 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:45.441 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:45.441 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:45.441 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:45.441 | select(.opcode=="crc32c") 00:29:45.441 | "\(.module_name) \(.executed)"' 00:29:45.441 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:45.701 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:45.701 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:45.701 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:45.701 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:45.701 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1547720 00:29:45.701 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1547720 ']' 00:29:45.701 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1547720 00:29:45.701 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:45.701 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:45.701 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1547720 00:29:45.701 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:45.701 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:29:45.701 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1547720' 00:29:45.701 killing process with pid 1547720 00:29:45.701 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1547720 00:29:45.701 Received shutdown signal, test time was about 2.000000 seconds 00:29:45.701 00:29:45.701 Latency(us) 00:29:45.701 [2024-11-20T09:03:16.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.701 [2024-11-20T09:03:16.617Z] =================================================================================================================== 00:29:45.701 [2024-11-20T09:03:16.617Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:45.701 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1547720 00:29:45.962 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:45.962 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:45.962 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:45.962 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:45.962 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:45.962 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:45.962 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:45.962 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1548598 00:29:45.962 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1548598 /var/tmp/bperf.sock 00:29:45.962 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1548598 ']' 00:29:45.962 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:45.962 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:45.962 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:45.962 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:45.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:45.962 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:45.962 10:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:45.962 [2024-11-20 10:03:16.740516] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
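After every pass the harness pulls accel framework statistics over the same socket and checks which module actually executed crc32c; with DSA off the expected module is software. The check, reassembled from the traced commands (the jq filter is verbatim from the trace; $SPDK is shorthand for the repo root):

    read -r acc_module acc_executed < <(
        $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 )) && [[ $acc_module == software ]] \
        && echo "crc32c ran in the software module, as expected with DSA off"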
00:29:45.962 [2024-11-20 10:03:16.740574] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1548598 ] 00:29:45.962 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:45.962 Zero copy mechanism will not be used. 00:29:45.962 [2024-11-20 10:03:16.824408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.962 [2024-11-20 10:03:16.853325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.903 10:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:46.903 10:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:46.903 10:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:46.903 10:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:46.903 10:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:46.903 10:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:46.903 10:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:47.476 nvme0n1 00:29:47.476 10:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:47.476 10:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:47.476 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:47.476 Zero copy mechanism will not be used. 00:29:47.476 Running I/O for 2 seconds... 
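Teardown after each pass runs the killprocess helper traced repeatedly above: verify the pid is alive, confirm by name that it is not a sudo wrapper, then kill and reap it. A condensed sketch of that logic covering the Linux branch only (the real helper in autotest_common.sh also handles FreeBSD and a sudo escalation path that is elided here):

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                 # nothing to kill
        kill -0 "$pid" 2>/dev/null || return 0    # already gone
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1    # sudo wrapper: branch elided
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap it, surface its exit code
    }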
00:29:49.368 4469.00 IOPS, 558.62 MiB/s [2024-11-20T09:03:20.545Z] 5956.00 IOPS, 744.50 MiB/s 00:29:49.629 Latency(us) 00:29:49.629 [2024-11-20T09:03:20.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.629 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:49.629 nvme0n1 : 2.01 5950.93 743.87 0.00 0.00 2683.81 1181.01 9994.24 00:29:49.629 [2024-11-20T09:03:20.545Z] =================================================================================================================== 00:29:49.629 [2024-11-20T09:03:20.545Z] Total : 5950.93 743.87 0.00 0.00 2683.81 1181.01 9994.24 00:29:49.629 { 00:29:49.629 "results": [ 00:29:49.629 { 00:29:49.629 "job": "nvme0n1", 00:29:49.629 "core_mask": "0x2", 00:29:49.629 "workload": "randwrite", 00:29:49.629 "status": "finished", 00:29:49.629 "queue_depth": 16, 00:29:49.629 "io_size": 131072, 00:29:49.629 "runtime": 2.005066, 00:29:49.629 "iops": 5950.926303672797, 00:29:49.629 "mibps": 743.8657879590996, 00:29:49.629 "io_failed": 0, 00:29:49.629 "io_timeout": 0, 00:29:49.629 "avg_latency_us": 2683.8071382277353, 00:29:49.629 "min_latency_us": 1181.0133333333333, 00:29:49.629 "max_latency_us": 9994.24 00:29:49.629 } 00:29:49.629 ], 00:29:49.629 "core_count": 1 00:29:49.629 } 00:29:49.629 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:49.629 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:49.629 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:49.629 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:49.629 | select(.opcode=="crc32c") 00:29:49.629 | "\(.module_name) \(.executed)"' 00:29:49.629 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:49.629 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:49.629 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:49.629 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:49.629 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:49.629 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1548598 00:29:49.629 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1548598 ']' 00:29:49.629 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1548598 00:29:49.629 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:49.629 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:49.629 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1548598 00:29:49.889 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:49.889 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:29:49.889 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1548598' 00:29:49.889 killing process with pid 1548598 00:29:49.889 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1548598 00:29:49.890 Received shutdown signal, test time was about 2.000000 seconds 00:29:49.890 00:29:49.890 Latency(us) 00:29:49.890 [2024-11-20T09:03:20.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.890 [2024-11-20T09:03:20.806Z] =================================================================================================================== 00:29:49.890 [2024-11-20T09:03:20.806Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:49.890 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1548598 00:29:49.890 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1545627 00:29:49.890 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1545627 ']' 00:29:49.890 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1545627 00:29:49.890 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:49.890 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:49.890 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1545627 00:29:49.890 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:49.890 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:49.890 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1545627' 00:29:49.890 killing process with pid 1545627 00:29:49.890 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1545627 00:29:49.890 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1545627 00:29:50.150 00:29:50.150 real 0m16.860s 00:29:50.150 user 0m33.448s 00:29:50.150 sys 0m3.687s 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:50.150 ************************************ 00:29:50.150 END TEST nvmf_digest_clean 00:29:50.150 ************************************ 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:50.150 ************************************ 00:29:50.150 START TEST nvmf_digest_error 00:29:50.150 ************************************ 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1549311 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1549311 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1549311 ']' 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:50.150 10:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:50.150 [2024-11-20 10:03:20.963945] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:29:50.150 [2024-11-20 10:03:20.963998] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.150 [2024-11-20 10:03:21.055659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.410 [2024-11-20 10:03:21.086550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.410 [2024-11-20 10:03:21.086578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.410 [2024-11-20 10:03:21.086584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.410 [2024-11-20 10:03:21.086588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.410 [2024-11-20 10:03:21.086593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:50.410 [2024-11-20 10:03:21.087056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.983 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:50.983 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:50.983 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:50.983 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:50.983 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:50.983 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:50.983 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:50.983 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.983 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:50.983 [2024-11-20 10:03:21.813050] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:50.983 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.983 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:50.983 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:50.983 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.983 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:50.983 null0 00:29:50.983 [2024-11-20 10:03:21.890761] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.244 [2024-11-20 10:03:21.914962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.244 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.244 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:51.244 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:51.244 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:51.244 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:51.244 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:51.244 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1549626 00:29:51.244 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1549626 /var/tmp/bperf.sock 00:29:51.244 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1549626 ']' 00:29:51.244 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
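The common_target_config step above only surfaces in the trace as its side effects (the accel_rpc notice, the null0 bdev, the TCP transport init, and the listener on 10.0.0.2:4420). A hedged reconstruction of the target-side RPCs behind it; the RPC names are standard SPDK calls, but the null-bdev sizing and the subsystem flags are assumptions, not read from this log:

  # target side, against the default /var/tmp/spdk.sock inside the test netns
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py bdev_null_create null0 100 4096                       # 100 MiB / 4 KiB block: assumed sizing
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a   # -a (allow any host): assumed
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Note the ordering: accel_assign_opc -o crc32c -m error has to run before framework_start_init, because accel module assignments are fixed at framework init; that is exactly why the nvmf_tgt above was started with --wait-for-rpc.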
00:29:51.244 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:51.244 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.244 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:51.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:51.244 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.244 10:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:51.244 [2024-11-20 10:03:21.970570] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:29:51.244 [2024-11-20 10:03:21.970620] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1549626 ] 00:29:51.244 [2024-11-20 10:03:22.054770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.244 [2024-11-20 10:03:22.084406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.186 10:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.186 10:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:52.186 10:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:52.186 10:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:52.186 10:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:52.186 10:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.186 10:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:52.186 10:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.187 10:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:52.187 10:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:52.447 nvme0n1 00:29:52.447 10:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:52.447 10:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.447 10:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
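At this point both injection controls have been exercised: the host-side bdev layer is configured to count NVMe errors and retry without limit, and the target's error module is armed to corrupt crc32c results. The relevant RPCs, all verbatim from the trace above:

  # host (bperf socket): keep per-code NVMe error counters and never give up on retries
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # target: make sure no stale injection is active, then corrupt crc32c output
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

Because the target now produces bad data digests on the C2H data it sends, READs over the --ddgst-enabled connection start failing the host's digest check, which is what the stream of nvme_tcp.c data digest errors below shows.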
00:29:52.447 10:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.447 10:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:52.447 10:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:52.710 Running I/O for 2 seconds... 00:29:52.710 [2024-11-20 10:03:23.449208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.710 [2024-11-20 10:03:23.449239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.710 [2024-11-20 10:03:23.449248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.710 [2024-11-20 10:03:23.459870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.710 [2024-11-20 10:03:23.459890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.710 [2024-11-20 10:03:23.459898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.710 [2024-11-20 10:03:23.468808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.710 [2024-11-20 10:03:23.468827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.710 [2024-11-20 10:03:23.468834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.710 [2024-11-20 10:03:23.476854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.710 [2024-11-20 10:03:23.476873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.710 [2024-11-20 10:03:23.476880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.710 [2024-11-20 10:03:23.486660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.710 [2024-11-20 10:03:23.486679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.710 [2024-11-20 10:03:23.486685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.710 [2024-11-20 10:03:23.496614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.710 [2024-11-20 10:03:23.496633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.710 [2024-11-20 10:03:23.496640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.710 [2024-11-20 10:03:23.506093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.710 [2024-11-20 10:03:23.506112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.710 [2024-11-20 10:03:23.506119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.710 [2024-11-20 10:03:23.514040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.710 [2024-11-20 10:03:23.514058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.710 [2024-11-20 10:03:23.514064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.710 [2024-11-20 10:03:23.523690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.710 [2024-11-20 10:03:23.523708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.710 [2024-11-20 10:03:23.523715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.710 [2024-11-20 10:03:23.533015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.710 [2024-11-20 10:03:23.533033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.710 [2024-11-20 10:03:23.533039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.710 [2024-11-20 10:03:23.541105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.710 [2024-11-20 10:03:23.541123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.710 [2024-11-20 10:03:23.541130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.711 [2024-11-20 10:03:23.550306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.711 [2024-11-20 10:03:23.550324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.711 [2024-11-20 10:03:23.550334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.711 [2024-11-20 10:03:23.558944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.711 [2024-11-20 10:03:23.558962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.711 [2024-11-20 10:03:23.558969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.711 [2024-11-20 10:03:23.567356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.711 [2024-11-20 10:03:23.567374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.711 [2024-11-20 10:03:23.567380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.711 [2024-11-20 10:03:23.575816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.711 [2024-11-20 10:03:23.575834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.711 [2024-11-20 10:03:23.575840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.711 [2024-11-20 10:03:23.586261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.711 [2024-11-20 10:03:23.586279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.711 [2024-11-20 10:03:23.586286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.711 [2024-11-20 10:03:23.594580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.711 [2024-11-20 10:03:23.594598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.711 [2024-11-20 10:03:23.594604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.711 [2024-11-20 10:03:23.603175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.711 [2024-11-20 10:03:23.603192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.711 [2024-11-20 10:03:23.603199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.711 [2024-11-20 10:03:23.612329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.711 [2024-11-20 10:03:23.612346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.711 [2024-11-20 10:03:23.612352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.711 [2024-11-20 10:03:23.622516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.711 [2024-11-20 10:03:23.622533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.711 [2024-11-20 10:03:23.622540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.973 [2024-11-20 10:03:23.631395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.973 [2024-11-20 10:03:23.631412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.973 [2024-11-20 10:03:23.631419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.973 [2024-11-20 10:03:23.639576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.973 [2024-11-20 10:03:23.639594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.973 [2024-11-20 10:03:23.639600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.973 [2024-11-20 10:03:23.648803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.973 [2024-11-20 10:03:23.648821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.973 [2024-11-20 10:03:23.648828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.973 [2024-11-20 10:03:23.657018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.973 [2024-11-20 10:03:23.657035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.973 [2024-11-20 10:03:23.657041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.973 [2024-11-20 10:03:23.666398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.973 [2024-11-20 10:03:23.666415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.973 [2024-11-20 10:03:23.666422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.973 [2024-11-20 10:03:23.675401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.973 [2024-11-20 10:03:23.675419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.973 [2024-11-20 10:03:23.675425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.973 [2024-11-20 10:03:23.684995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.973 [2024-11-20 10:03:23.685013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:52.973 [2024-11-20 10:03:23.685020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.973 [2024-11-20 10:03:23.694331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.973 [2024-11-20 10:03:23.694349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.973 [2024-11-20 10:03:23.694355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.973 [2024-11-20 10:03:23.702556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.973 [2024-11-20 10:03:23.702574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.973 [2024-11-20 10:03:23.702584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.973 [2024-11-20 10:03:23.712399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.973 [2024-11-20 10:03:23.712417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.973 [2024-11-20 10:03:23.712424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.973 [2024-11-20 10:03:23.722242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.973 [2024-11-20 10:03:23.722260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.973 [2024-11-20 10:03:23.722267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.973 [2024-11-20 10:03:23.730717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.973 [2024-11-20 10:03:23.730735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.973 [2024-11-20 10:03:23.730741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.973 [2024-11-20 10:03:23.740441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.973 [2024-11-20 10:03:23.740459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.973 [2024-11-20 10:03:23.740466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.973 [2024-11-20 10:03:23.747428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.973 [2024-11-20 10:03:23.747446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4639 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.973 [2024-11-20 10:03:23.747453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.973 [2024-11-20 10:03:23.757996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.973 [2024-11-20 10:03:23.758014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.973 [2024-11-20 10:03:23.758020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.973 [2024-11-20 10:03:23.765237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.973 [2024-11-20 10:03:23.765254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.973 [2024-11-20 10:03:23.765260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.973 [2024-11-20 10:03:23.777650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.973 [2024-11-20 10:03:23.777667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.973 [2024-11-20 10:03:23.777674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.973 [2024-11-20 10:03:23.788273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.974 [2024-11-20 10:03:23.788293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.974 [2024-11-20 10:03:23.788300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.974 [2024-11-20 10:03:23.796080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.974 [2024-11-20 10:03:23.796097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.974 [2024-11-20 10:03:23.796103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.974 [2024-11-20 10:03:23.805400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.974 [2024-11-20 10:03:23.805418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.974 [2024-11-20 10:03:23.805424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.974 [2024-11-20 10:03:23.814402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.974 [2024-11-20 10:03:23.814419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:17 nsid:1 lba:1272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.974 [2024-11-20 10:03:23.814426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.974 [2024-11-20 10:03:23.822992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.974 [2024-11-20 10:03:23.823009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.974 [2024-11-20 10:03:23.823016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.974 [2024-11-20 10:03:23.832706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.974 [2024-11-20 10:03:23.832723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.974 [2024-11-20 10:03:23.832730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.974 [2024-11-20 10:03:23.840426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.974 [2024-11-20 10:03:23.840443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.974 [2024-11-20 10:03:23.840449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.974 [2024-11-20 10:03:23.850509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.974 [2024-11-20 10:03:23.850527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.974 [2024-11-20 10:03:23.850534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.974 [2024-11-20 10:03:23.859091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.974 [2024-11-20 10:03:23.859108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.974 [2024-11-20 10:03:23.859115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.974 [2024-11-20 10:03:23.868396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.974 [2024-11-20 10:03:23.868415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.974 [2024-11-20 10:03:23.868422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.974 [2024-11-20 10:03:23.877023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:52.974 [2024-11-20 10:03:23.877041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.974 [2024-11-20 10:03:23.877049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.236 [2024-11-20 10:03:23.886312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.236 [2024-11-20 10:03:23.886329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.236 [2024-11-20 10:03:23.886336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.236 [2024-11-20 10:03:23.895749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.236 [2024-11-20 10:03:23.895766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.236 [2024-11-20 10:03:23.895774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.236 [2024-11-20 10:03:23.903938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.236 [2024-11-20 10:03:23.903955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.236 [2024-11-20 10:03:23.903962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.236 [2024-11-20 10:03:23.912711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.236 [2024-11-20 10:03:23.912728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.236 [2024-11-20 10:03:23.912735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.236 [2024-11-20 10:03:23.921903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.236 [2024-11-20 10:03:23.921920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.236 [2024-11-20 10:03:23.921926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.236 [2024-11-20 10:03:23.930527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.236 [2024-11-20 10:03:23.930544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.236 [2024-11-20 10:03:23.930551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.236 [2024-11-20 10:03:23.939466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.236 
[2024-11-20 10:03:23.939483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.236 [2024-11-20 10:03:23.939493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.236 [2024-11-20 10:03:23.948471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.236 [2024-11-20 10:03:23.948488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.236 [2024-11-20 10:03:23.948494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.236 [2024-11-20 10:03:23.956631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.236 [2024-11-20 10:03:23.956648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.236 [2024-11-20 10:03:23.956654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.236 [2024-11-20 10:03:23.965163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.236 [2024-11-20 10:03:23.965180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.236 [2024-11-20 10:03:23.965186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.236 [2024-11-20 10:03:23.975637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.236 [2024-11-20 10:03:23.975654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.236 [2024-11-20 10:03:23.975661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.236 [2024-11-20 10:03:23.984834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.236 [2024-11-20 10:03:23.984851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.236 [2024-11-20 10:03:23.984858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.236 [2024-11-20 10:03:23.994573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.236 [2024-11-20 10:03:23.994590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.236 [2024-11-20 10:03:23.994596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.236 [2024-11-20 10:03:24.002324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x8105c0) 00:29:53.236 [2024-11-20 10:03:24.002341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.236 [2024-11-20 10:03:24.002348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.236 [2024-11-20 10:03:24.012505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.236 [2024-11-20 10:03:24.012522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.236 [2024-11-20 10:03:24.012529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.236 [2024-11-20 10:03:24.023144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.237 [2024-11-20 10:03:24.023168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.237 [2024-11-20 10:03:24.023175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.237 [2024-11-20 10:03:24.030997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.237 [2024-11-20 10:03:24.031014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.237 [2024-11-20 10:03:24.031021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.237 [2024-11-20 10:03:24.042201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.237 [2024-11-20 10:03:24.042218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.237 [2024-11-20 10:03:24.042225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.237 [2024-11-20 10:03:24.051983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.237 [2024-11-20 10:03:24.052000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.237 [2024-11-20 10:03:24.052007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.237 [2024-11-20 10:03:24.060089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.237 [2024-11-20 10:03:24.060107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.237 [2024-11-20 10:03:24.060113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.237 [2024-11-20 10:03:24.069200] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.237 [2024-11-20 10:03:24.069217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.237 [2024-11-20 10:03:24.069223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.237 [2024-11-20 10:03:24.078031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.237 [2024-11-20 10:03:24.078049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.237 [2024-11-20 10:03:24.078055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.237 [2024-11-20 10:03:24.086500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.237 [2024-11-20 10:03:24.086518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.237 [2024-11-20 10:03:24.086524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.237 [2024-11-20 10:03:24.095494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.237 [2024-11-20 10:03:24.095511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.237 [2024-11-20 10:03:24.095517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.237 [2024-11-20 10:03:24.104215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.237 [2024-11-20 10:03:24.104236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.237 [2024-11-20 10:03:24.104243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.237 [2024-11-20 10:03:24.113540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.237 [2024-11-20 10:03:24.113558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.237 [2024-11-20 10:03:24.113564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.237 [2024-11-20 10:03:24.122659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.237 [2024-11-20 10:03:24.122677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.237 [2024-11-20 10:03:24.122683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
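Each failure in this stream is a pair: nvme_qpair.c prints the READ whose data carried the bad digest, and spdk_nvme_print_completion shows it completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. generic status code 0x22 with dnr:0 (Do Not Retry clear), which the bdev layer treats as retryable; with --bdev-retry-count -1 it retries indefinitely, so the workload keeps running for the full 2 seconds despite the corruption. A quick way to tally the failures when reading such a log offline, where bperf.log is a hypothetical capture of this output:

  # count host-side data digest failures in a saved log
  grep -c 'data digest error on tqpair' bperf.log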
00:29:53.237 [2024-11-20 10:03:24.130750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.237 [2024-11-20 10:03:24.130767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.237 [2024-11-20 10:03:24.130774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.237 [2024-11-20 10:03:24.141174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.237 [2024-11-20 10:03:24.141192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.237 [2024-11-20 10:03:24.141198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.148935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.148952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.148959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.159226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.159243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.159250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.167476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.167494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.167501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.176439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.176457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.176467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.185359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.185377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.185383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.194195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.194213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.194220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.202778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.202795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.202801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.211763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.211779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.211786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.220948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.220966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.220972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.229869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.229886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.229893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.237653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.237670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.237677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.248070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.248087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.248093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.257091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.257108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.257115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.265514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.265531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.265537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.274122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.274139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.274146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.283314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.283331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.283338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.292259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.292276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.292282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.301339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.301356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.301363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.310233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.310251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.310257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.319296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.319314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.319320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.327356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.327374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.327383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.336050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.336068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.336074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.345962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.345979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.345985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.355215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.355232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.355238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.363191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.363209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 [2024-11-20 10:03:24.363215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.499 [2024-11-20 10:03:24.372814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.499 [2024-11-20 10:03:24.372832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.499 
[2024-11-20 10:03:24.372838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.500 [2024-11-20 10:03:24.381829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.500 [2024-11-20 10:03:24.381847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.500 [2024-11-20 10:03:24.381853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.500 [2024-11-20 10:03:24.389852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.500 [2024-11-20 10:03:24.389869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.500 [2024-11-20 10:03:24.389875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.500 [2024-11-20 10:03:24.400075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.500 [2024-11-20 10:03:24.400092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.500 [2024-11-20 10:03:24.400099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.500 [2024-11-20 10:03:24.409351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.500 [2024-11-20 10:03:24.409373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.500 [2024-11-20 10:03:24.409379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.418930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.418948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.418955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.427228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.427245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.427252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 27941.00 IOPS, 109.14 MiB/s [2024-11-20T09:03:24.677Z] [2024-11-20 10:03:24.436071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.436089] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.436095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.448442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.448459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.448466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.456877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.456894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.456900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.465853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.465870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.465877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.474760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.474777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.474783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.483728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.483745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.483751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.491909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.491926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.491933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.501348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.501366] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.501372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.509978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.509995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.510002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.518762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.518779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.518786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.528424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.528441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.528447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.537861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.537879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.537885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.546870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.546887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.546893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.554136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.554153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.554164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.564505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 
00:29:53.761 [2024-11-20 10:03:24.564522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.564534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.575818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.575835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.575841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.585098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.585115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.585121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.594391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.594408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.594414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.602788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.602805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.602811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.611356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.611374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.611380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.620751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.620768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.620774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.629783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.629800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.629806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.637622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.637639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.637646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.647392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.647412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.761 [2024-11-20 10:03:24.647419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.761 [2024-11-20 10:03:24.655734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.761 [2024-11-20 10:03:24.655751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.762 [2024-11-20 10:03:24.655758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.762 [2024-11-20 10:03:24.664489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.762 [2024-11-20 10:03:24.664507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.762 [2024-11-20 10:03:24.664514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.762 [2024-11-20 10:03:24.673090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:53.762 [2024-11-20 10:03:24.673107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.762 [2024-11-20 10:03:24.673114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.023 [2024-11-20 10:03:24.681619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.023 [2024-11-20 10:03:24.681637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.023 [2024-11-20 10:03:24.681644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.023 [2024-11-20 10:03:24.691126] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.023 [2024-11-20 10:03:24.691144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.023 [2024-11-20 10:03:24.691150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.023 [2024-11-20 10:03:24.698936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.023 [2024-11-20 10:03:24.698953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.023 [2024-11-20 10:03:24.698960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.023 [2024-11-20 10:03:24.708720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.023 [2024-11-20 10:03:24.708738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.023 [2024-11-20 10:03:24.708744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.023 [2024-11-20 10:03:24.718712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.718729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.718736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.726823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.726840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.726847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.736853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.736870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.736876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.746642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.746659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.746666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:54.024 [2024-11-20 10:03:24.755443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.755460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.755466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.764000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.764017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.764024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.773068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.773085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.773092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.783570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.783589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.783595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.793147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.793168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.793174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.801910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.801928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.801938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.813265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.813282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.813289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.821092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.821110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.821117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.830719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.830737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.830744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.839628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.839645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.839652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.848325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.848343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.848350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.856571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.856589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.856595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.865695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.865713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.865720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.875734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.875752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.875758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.884541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.884559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.884565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.893061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.893079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.893085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.901552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.901570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.901576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.911060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.911077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.911084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.919322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.919339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.919345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.024 [2024-11-20 10:03:24.928182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.024 [2024-11-20 10:03:24.928200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.024 [2024-11-20 10:03:24.928206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.286 [2024-11-20 10:03:24.937801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.286 [2024-11-20 10:03:24.937820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.286 [2024-11-20 10:03:24.937826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.286 [2024-11-20 10:03:24.946587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.286 [2024-11-20 10:03:24.946605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.286 [2024-11-20 10:03:24.946611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.286 [2024-11-20 10:03:24.955463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.286 [2024-11-20 10:03:24.955481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.286 [2024-11-20 10:03:24.955491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.286 [2024-11-20 10:03:24.966988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.286 [2024-11-20 10:03:24.967006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.286 [2024-11-20 10:03:24.967013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.286 [2024-11-20 10:03:24.977127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.286 [2024-11-20 10:03:24.977145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.286 [2024-11-20 10:03:24.977151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.286 [2024-11-20 10:03:24.985233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.286 [2024-11-20 10:03:24.985250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.286 [2024-11-20 10:03:24.985258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.286 [2024-11-20 10:03:24.993834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.286 [2024-11-20 10:03:24.993852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.286 [2024-11-20 10:03:24.993858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.286 [2024-11-20 10:03:25.002481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.286 [2024-11-20 10:03:25.002498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.286 
[2024-11-20 10:03:25.002505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.286 [2024-11-20 10:03:25.011700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.286 [2024-11-20 10:03:25.011718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.286 [2024-11-20 10:03:25.011724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.286 [2024-11-20 10:03:25.020575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.286 [2024-11-20 10:03:25.020592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.286 [2024-11-20 10:03:25.020599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.286 [2024-11-20 10:03:25.029245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.286 [2024-11-20 10:03:25.029263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.286 [2024-11-20 10:03:25.029269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.286 [2024-11-20 10:03:25.038323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.286 [2024-11-20 10:03:25.038343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.286 [2024-11-20 10:03:25.038350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.286 [2024-11-20 10:03:25.047019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.287 [2024-11-20 10:03:25.047036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.287 [2024-11-20 10:03:25.047043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.287 [2024-11-20 10:03:25.055845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.287 [2024-11-20 10:03:25.055862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.287 [2024-11-20 10:03:25.055869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.287 [2024-11-20 10:03:25.064488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.287 [2024-11-20 10:03:25.064505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13042 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.287 [2024-11-20 10:03:25.064511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.287 [2024-11-20 10:03:25.073239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.287 [2024-11-20 10:03:25.073255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.287 [2024-11-20 10:03:25.073261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.287 [2024-11-20 10:03:25.083219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.287 [2024-11-20 10:03:25.083235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.287 [2024-11-20 10:03:25.083242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.287 [2024-11-20 10:03:25.091799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.287 [2024-11-20 10:03:25.091816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.287 [2024-11-20 10:03:25.091822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.287 [2024-11-20 10:03:25.102721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.287 [2024-11-20 10:03:25.102738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.287 [2024-11-20 10:03:25.102744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.287 [2024-11-20 10:03:25.109990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.287 [2024-11-20 10:03:25.110007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.287 [2024-11-20 10:03:25.110013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.287 [2024-11-20 10:03:25.119882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.287 [2024-11-20 10:03:25.119900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.287 [2024-11-20 10:03:25.119906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.287 [2024-11-20 10:03:25.128443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.287 [2024-11-20 10:03:25.128460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:4441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.287 [2024-11-20 10:03:25.128467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.287 [2024-11-20 10:03:25.137805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.287 [2024-11-20 10:03:25.137822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.287 [2024-11-20 10:03:25.137829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.287 [2024-11-20 10:03:25.146507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.287 [2024-11-20 10:03:25.146524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.287 [2024-11-20 10:03:25.146531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.287 [2024-11-20 10:03:25.155679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.287 [2024-11-20 10:03:25.155696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.287 [2024-11-20 10:03:25.155702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.287 [2024-11-20 10:03:25.163563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.287 [2024-11-20 10:03:25.163580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.287 [2024-11-20 10:03:25.163587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.287 [2024-11-20 10:03:25.172729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.287 [2024-11-20 10:03:25.172747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.287 [2024-11-20 10:03:25.172754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.287 [2024-11-20 10:03:25.180590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.287 [2024-11-20 10:03:25.180607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.287 [2024-11-20 10:03:25.180613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.287 [2024-11-20 10:03:25.190238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.287 [2024-11-20 10:03:25.190255] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.287 [2024-11-20 10:03:25.190265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.549 [2024-11-20 10:03:25.199541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.549 [2024-11-20 10:03:25.199559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.549 [2024-11-20 10:03:25.199566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.549 [2024-11-20 10:03:25.209086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.549 [2024-11-20 10:03:25.209104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.549 [2024-11-20 10:03:25.209111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.549 [2024-11-20 10:03:25.217219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.549 [2024-11-20 10:03:25.217237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.549 [2024-11-20 10:03:25.217243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.549 [2024-11-20 10:03:25.227554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.549 [2024-11-20 10:03:25.227571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.549 [2024-11-20 10:03:25.227578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.549 [2024-11-20 10:03:25.235441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.549 [2024-11-20 10:03:25.235458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.549 [2024-11-20 10:03:25.235465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.549 [2024-11-20 10:03:25.244470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.549 [2024-11-20 10:03:25.244488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.549 [2024-11-20 10:03:25.244495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.549 [2024-11-20 10:03:25.253449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 
00:29:54.549 [2024-11-20 10:03:25.253466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.549 [2024-11-20 10:03:25.253473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.549 [2024-11-20 10:03:25.261958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.549 [2024-11-20 10:03:25.261976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.549 [2024-11-20 10:03:25.261982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.549 [2024-11-20 10:03:25.271237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.549 [2024-11-20 10:03:25.271258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.549 [2024-11-20 10:03:25.271265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.549 [2024-11-20 10:03:25.279083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.549 [2024-11-20 10:03:25.279101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.549 [2024-11-20 10:03:25.279108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.549 [2024-11-20 10:03:25.289053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.549 [2024-11-20 10:03:25.289070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.549 [2024-11-20 10:03:25.289077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.549 [2024-11-20 10:03:25.297720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.549 [2024-11-20 10:03:25.297738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.549 [2024-11-20 10:03:25.297744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.549 [2024-11-20 10:03:25.305663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.549 [2024-11-20 10:03:25.305681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.549 [2024-11-20 10:03:25.305687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.549 [2024-11-20 10:03:25.314603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.549 [2024-11-20 10:03:25.314621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.549 [2024-11-20 10:03:25.314627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.549 [2024-11-20 10:03:25.323957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.550 [2024-11-20 10:03:25.323975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.550 [2024-11-20 10:03:25.323981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.550 [2024-11-20 10:03:25.331850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.550 [2024-11-20 10:03:25.331868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.550 [2024-11-20 10:03:25.331874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.550 [2024-11-20 10:03:25.341483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.550 [2024-11-20 10:03:25.341501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.550 [2024-11-20 10:03:25.341508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.550 [2024-11-20 10:03:25.350093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.550 [2024-11-20 10:03:25.350110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.550 [2024-11-20 10:03:25.350117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.550 [2024-11-20 10:03:25.358934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.550 [2024-11-20 10:03:25.358951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.550 [2024-11-20 10:03:25.358958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.550 [2024-11-20 10:03:25.367997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.550 [2024-11-20 10:03:25.368015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.550 [2024-11-20 10:03:25.368021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.550 [2024-11-20 10:03:25.378603] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.550 [2024-11-20 10:03:25.378622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.550 [2024-11-20 10:03:25.378628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.550 [2024-11-20 10:03:25.387956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.550 [2024-11-20 10:03:25.387972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.550 [2024-11-20 10:03:25.387979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.550 [2024-11-20 10:03:25.396834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.550 [2024-11-20 10:03:25.396851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.550 [2024-11-20 10:03:25.396858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.550 [2024-11-20 10:03:25.405801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.550 [2024-11-20 10:03:25.405819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.550 [2024-11-20 10:03:25.405825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.550 [2024-11-20 10:03:25.415714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.550 [2024-11-20 10:03:25.415732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.550 [2024-11-20 10:03:25.415738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.550 [2024-11-20 10:03:25.423286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.550 [2024-11-20 10:03:25.423304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.550 [2024-11-20 10:03:25.423315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.550 [2024-11-20 10:03:25.432614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8105c0) 00:29:54.550 [2024-11-20 10:03:25.432632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.550 [2024-11-20 10:03:25.432639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0
00:29:54.550 28067.00 IOPS, 109.64 MiB/s
00:29:54.550 Latency(us)
00:29:54.550 [2024-11-20T09:03:25.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:54.550 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:54.550 nvme0n1 : 2.00 28089.99 109.73 0.00 0.00 4552.46 2389.33 15728.64
00:29:54.550 [2024-11-20T09:03:25.466Z] ===================================================================================================================
00:29:54.550 [2024-11-20T09:03:25.466Z] Total : 28089.99 109.73 0.00 0.00 4552.46 2389.33 15728.64
00:29:54.550 {
00:29:54.550 "results": [
00:29:54.550 {
00:29:54.550 "job": "nvme0n1",
00:29:54.550 "core_mask": "0x2",
00:29:54.550 "workload": "randread",
00:29:54.550 "status": "finished",
00:29:54.550 "queue_depth": 128,
00:29:54.550 "io_size": 4096,
00:29:54.550 "runtime": 2.00292,
00:29:54.550 "iops": 28089.988616619736,
00:29:54.550 "mibps": 109.72651803367084,
00:29:54.550 "io_failed": 0,
00:29:54.550 "io_timeout": 0,
00:29:54.550 "avg_latency_us": 4552.463898190609,
00:29:54.550 "min_latency_us": 2389.3333333333335,
00:29:54.550 "max_latency_us": 15728.64
00:29:54.550 }
00:29:54.550 ],
00:29:54.550 "core_count": 1
00:29:54.550 }
00:29:54.811 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:54.811 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:54.811 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:54.811 | .driver_specific
00:29:54.811 | .nvme_error
00:29:54.811 | .status_code
00:29:54.811 | .command_transient_transport_error'
00:29:54.811 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:54.811 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 ))
00:29:54.811 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1549626
00:29:54.811 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1549626 ']'
00:29:54.811 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1549626
00:29:54.811 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:54.811 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:54.811 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1549626
00:29:54.811 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:54.811 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:54.811 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1549626'
killing process with pid 1549626
10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1549626
Received shutdown signal, test time was about 2.000000 seconds
00
00:29:54.811 Latency(us)
00:29:54.811 [2024-11-20T09:03:25.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:54.811 [2024-11-20T09:03:25.727Z] ===================================================================================================================
00:29:54.811 [2024-11-20T09:03:25.727Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:54.811 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1549626
00:29:55.071 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:55.071 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:55.071 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:55.071 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:55.071 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:55.071 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1550343
00:29:55.071 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1550343 /var/tmp/bperf.sock
00:29:55.071 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:55.071 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1550343 ']'
00:29:55.071 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:55.071 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:55.071 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:55.071 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:55.071 10:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:55.071 [2024-11-20 10:03:25.858311] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:29:55.071 [2024-11-20 10:03:25.858367] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550343 ]
00:29:55.071 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:55.071 Zero copy mechanism will not be used.
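The trace above shows digest.sh's launch pattern for the error-injection pass: bdevperf is started idle (-z) on a private RPC socket, and waitforlisten blocks until that socket accepts RPCs. A minimal sketch of the same pattern, assuming it is run from the SPDK tree; the rpc_get_methods polling loop is an assumption standing in for what waitforlisten does, not something this log shows:

  # Start bdevperf paused (-z); -r points it at a dedicated RPC socket.
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Poll the private socket until bdevperf's RPC server answers.
  until ./scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  # bdevperf is now idle and ready to be configured over /var/tmp/bperf.sock.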
00:29:55.071 [2024-11-20 10:03:25.942527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:55.071 [2024-11-20 10:03:25.970332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:56.012 10:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:56.012 10:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:56.012 10:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:56.012 10:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:56.012 10:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:56.012 10:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:56.012 10:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:56.012 10:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:56.012 10:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:56.012 10:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:56.272 nvme0n1
00:29:56.272 10:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:56.272 10:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:56.272 10:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:56.272 10:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:56.272 10:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:56.272 10:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:56.534 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:56.534 Zero copy mechanism will not be used.
00:29:56.534 Running I/O for 2 seconds...
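Between "Reactor started" and "Running I/O" the trace configures the digest-error case: per-bdev NVMe error-status counting is switched on, failed I/O is retried indefinitely, the controller is attached with data digest enabled (--ddgst), and the accel error injector is armed to corrupt crc32c results so that received data digests miscompare. A sketch condensing that RPC sequence, using the socket and target addresses shown in the log and assuming it is run from the SPDK tree:

  RPC='./scripts/rpc.py -s /var/tmp/bperf.sock'
  # Count NVMe error statuses per bdev and retry failed I/O indefinitely.
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any stale injection, then attach with data digest (--ddgst) on.
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Arm the injector to corrupt crc32c results at the traced interval.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # Start the queued randread workload in the idle bdevperf.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then surfaces below as a data digest error on the qpair plus a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, and get_transient_errcount later reads the total back via bdev_get_iostat and jq, as in the trace after the first run above.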
00:29:56.534 [2024-11-20 10:03:27.194717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.194752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.194761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.205793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.205818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.205825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.216654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.216676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.216683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.228237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.228257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.228264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.235637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.235657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.235664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.245537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.245557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.245564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.255565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.255584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.255591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.265015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.265035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.265041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.268538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.268558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.268564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.273535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.273554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.273561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.277508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.277528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.277536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.281989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.282008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.282014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.288268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.288287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.288294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.294244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.294264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.294271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.299752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.299772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.299782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.304423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.304442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.304448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.309229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.309248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.309254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.318007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.318027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.318033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.321901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.321920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.321927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.327248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.327267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.327274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.331451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.331469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.331476] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.339849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.339868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.339875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.348566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.348585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.348592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.359942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.359965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.359972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.371117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.371136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.371143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.382724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.534 [2024-11-20 10:03:27.382743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.534 [2024-11-20 10:03:27.382750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.534 [2024-11-20 10:03:27.394476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.535 [2024-11-20 10:03:27.394495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.535 [2024-11-20 10:03:27.394502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.535 [2024-11-20 10:03:27.400988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.535 [2024-11-20 10:03:27.401006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:56.535 [2024-11-20 10:03:27.401013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.535 [2024-11-20 10:03:27.406852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.535 [2024-11-20 10:03:27.406870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.535 [2024-11-20 10:03:27.406877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.535 [2024-11-20 10:03:27.414200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.535 [2024-11-20 10:03:27.414219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.535 [2024-11-20 10:03:27.414226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.535 [2024-11-20 10:03:27.422695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.535 [2024-11-20 10:03:27.422714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.535 [2024-11-20 10:03:27.422720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.535 [2024-11-20 10:03:27.427243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.535 [2024-11-20 10:03:27.427261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.535 [2024-11-20 10:03:27.427268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.535 [2024-11-20 10:03:27.435609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.535 [2024-11-20 10:03:27.435627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.535 [2024-11-20 10:03:27.435634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.535 [2024-11-20 10:03:27.444364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.535 [2024-11-20 10:03:27.444383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.535 [2024-11-20 10:03:27.444390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.795 [2024-11-20 10:03:27.452673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.795 [2024-11-20 10:03:27.452692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:12 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.795 [2024-11-20 10:03:27.452699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.795 [2024-11-20 10:03:27.457272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.795 [2024-11-20 10:03:27.457290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.795 [2024-11-20 10:03:27.457297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.795 [2024-11-20 10:03:27.467273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.795 [2024-11-20 10:03:27.467292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.795 [2024-11-20 10:03:27.467298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.795 [2024-11-20 10:03:27.473614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.795 [2024-11-20 10:03:27.473632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.795 [2024-11-20 10:03:27.473639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.795 [2024-11-20 10:03:27.481179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.795 [2024-11-20 10:03:27.481197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.795 [2024-11-20 10:03:27.481204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.795 [2024-11-20 10:03:27.487392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.795 [2024-11-20 10:03:27.487412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.795 [2024-11-20 10:03:27.487418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.795 [2024-11-20 10:03:27.495012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.795 [2024-11-20 10:03:27.495034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.795 [2024-11-20 10:03:27.495041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.795 [2024-11-20 10:03:27.499356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.795 [2024-11-20 10:03:27.499375] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.795 [2024-11-20 10:03:27.499382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.795 [2024-11-20 10:03:27.506845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.795 [2024-11-20 10:03:27.506865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.795 [2024-11-20 10:03:27.506871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.795 [2024-11-20 10:03:27.510916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.795 [2024-11-20 10:03:27.510935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.795 [2024-11-20 10:03:27.510941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.795 [2024-11-20 10:03:27.516488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.795 [2024-11-20 10:03:27.516507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.795 [2024-11-20 10:03:27.516513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.795 [2024-11-20 10:03:27.523779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.795 [2024-11-20 10:03:27.523798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.795 [2024-11-20 10:03:27.523804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.795 [2024-11-20 10:03:27.529396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.795 [2024-11-20 10:03:27.529414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.795 [2024-11-20 10:03:27.529420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.795 [2024-11-20 10:03:27.536576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.795 [2024-11-20 10:03:27.536595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.536601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.544296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 
00:29:56.796 [2024-11-20 10:03:27.544315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.544321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.549992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.550011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.550018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.554310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.554329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.554335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.562399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.562418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.562425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.570691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.570711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.570717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.575105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.575124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.575130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.579642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.579661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.579667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.584015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.584034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.584040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.588430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.588449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.588455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.595570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.595589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.595598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.600019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.600038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.600044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.604287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.604306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.604312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.611609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.611628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.611635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.621236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.621255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.621261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.630332] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.630351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.630358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.635131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.635150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.635157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.640946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.640964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.640971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.649421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.649440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.649446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.657471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.657494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.657500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.665319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.665339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.665345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.796 [2024-11-20 10:03:27.674660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:56.796 [2024-11-20 10:03:27.674680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.796 [2024-11-20 10:03:27.674686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0
00:29:56.796 [2024-11-20 10:03:27.683614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10)
00:29:56.796 [2024-11-20 10:03:27.683633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:56.796 [2024-11-20 10:03:27.683640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same three-line pattern repeats from 10:03:27.688 through 10:03:28.188 (elapsed 00:29:56.796-00:29:57.322): each receive-path CRC32C check on tqpair=(0x23b0a10) reports a data digest error, and the affected READ (qid:1, nsid:1, len:32, varying cid/lba) is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0, sqhd cycling 0002/0022/0042/0062, p:0 m:0 dnr:0 ...]
00:29:57.322 4392.00 IOPS, 549.00 MiB/s [2024-11-20T09:03:28.238Z]
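(Editor's note: the "data digest error" above is the NVMe/TCP data digest, a CRC32C carried in the DDGST trailer of data-bearing PDUs; when the digest recomputed over the received DATA bytes disagrees with the trailer, the payload cannot be trusted and the command is completed with a transient transport error rather than returned as good data. A minimal illustrative sketch of that check follows; this is not SPDK's accel-sequence code path, and the helper names crc32c()/ddgst_ok() are hypothetical.)

/* Minimal sketch of an NVMe/TCP data digest check. Assumes CRC32C
 * (Castagnoli) with the standard parameters: reflected polynomial
 * 0x82F63B78, initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF. */
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;                       /* standard CRC32C seed */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)                   /* bitwise, reflected */
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;                         /* final inversion */
}

/* recv_ddgst is assumed already decoded from the little-endian DDGST
 * trailer of the PDU. A false return here is what the log reports as
 * "data digest error" before failing the command with a transient
 * transport error (status 00/22), which leaves the host free to retry. */
static bool ddgst_ok(const uint8_t *data, size_t len, uint32_t recv_ddgst)
{
    return crc32c(data, len) == recv_ddgst;
}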
[... the data digest error / transient transport error pattern continues from 10:03:28.194 through 10:03:28.846 (elapsed 00:29:57.322-00:29:58.109), still on tqpair=(0x23b0a10): READ commands on qid:1 (nsid:1, len:32, varying cid/lba) each complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0 p:0 m:0 dnr:0 ...]
00:29:58.109 [2024-11-20 10:03:28.857765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10)
00:29:58.109 [2024-11-20 10:03:28.857784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.109 [2024-11-20 10:03:28.857790] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.109 [2024-11-20 10:03:28.865113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.109 [2024-11-20 10:03:28.865133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.109 [2024-11-20 10:03:28.865139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.109 [2024-11-20 10:03:28.875558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.109 [2024-11-20 10:03:28.875577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.109 [2024-11-20 10:03:28.875583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.109 [2024-11-20 10:03:28.886566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.109 [2024-11-20 10:03:28.886585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.109 [2024-11-20 10:03:28.886592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.109 [2024-11-20 10:03:28.898044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.109 [2024-11-20 10:03:28.898063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.109 [2024-11-20 10:03:28.898070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.109 [2024-11-20 10:03:28.907979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.109 [2024-11-20 10:03:28.907998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.109 [2024-11-20 10:03:28.908004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.109 [2024-11-20 10:03:28.918710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.109 [2024-11-20 10:03:28.918729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.109 [2024-11-20 10:03:28.918738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.109 [2024-11-20 10:03:28.928991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.109 [2024-11-20 10:03:28.929011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:58.109 [2024-11-20 10:03:28.929017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.109 [2024-11-20 10:03:28.939503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.109 [2024-11-20 10:03:28.939523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.109 [2024-11-20 10:03:28.939529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.109 [2024-11-20 10:03:28.949023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.109 [2024-11-20 10:03:28.949042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.109 [2024-11-20 10:03:28.949048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.109 [2024-11-20 10:03:28.960462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.109 [2024-11-20 10:03:28.960481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.109 [2024-11-20 10:03:28.960487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.109 [2024-11-20 10:03:28.969479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.109 [2024-11-20 10:03:28.969498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.109 [2024-11-20 10:03:28.969505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.109 [2024-11-20 10:03:28.978952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.109 [2024-11-20 10:03:28.978971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.109 [2024-11-20 10:03:28.978977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.109 [2024-11-20 10:03:28.989434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.109 [2024-11-20 10:03:28.989453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.109 [2024-11-20 10:03:28.989459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.109 [2024-11-20 10:03:28.999230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.109 [2024-11-20 10:03:28.999250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8512 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.109 [2024-11-20 10:03:28.999257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.109 [2024-11-20 10:03:29.008031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.109 [2024-11-20 10:03:29.008052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.109 [2024-11-20 10:03:29.008059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.109 [2024-11-20 10:03:29.019123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.109 [2024-11-20 10:03:29.019142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.109 [2024-11-20 10:03:29.019149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.370 [2024-11-20 10:03:29.028911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.370 [2024-11-20 10:03:29.028929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.370 [2024-11-20 10:03:29.028936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.370 [2024-11-20 10:03:29.039240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.370 [2024-11-20 10:03:29.039258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.370 [2024-11-20 10:03:29.039265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.370 [2024-11-20 10:03:29.050772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.370 [2024-11-20 10:03:29.050792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.370 [2024-11-20 10:03:29.050798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.370 [2024-11-20 10:03:29.059100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.370 [2024-11-20 10:03:29.059119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.370 [2024-11-20 10:03:29.059125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.370 [2024-11-20 10:03:29.068531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.370 [2024-11-20 10:03:29.068550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.370 [2024-11-20 10:03:29.068556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.370 [2024-11-20 10:03:29.078919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.370 [2024-11-20 10:03:29.078938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.370 [2024-11-20 10:03:29.078944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.370 [2024-11-20 10:03:29.090578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.370 [2024-11-20 10:03:29.090597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.370 [2024-11-20 10:03:29.090604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.370 [2024-11-20 10:03:29.102453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.370 [2024-11-20 10:03:29.102472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.370 [2024-11-20 10:03:29.102479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.370 [2024-11-20 10:03:29.113775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.370 [2024-11-20 10:03:29.113794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.370 [2024-11-20 10:03:29.113800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.370 [2024-11-20 10:03:29.124685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.370 [2024-11-20 10:03:29.124704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.370 [2024-11-20 10:03:29.124711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.370 [2024-11-20 10:03:29.135356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.370 [2024-11-20 10:03:29.135376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.370 [2024-11-20 10:03:29.135382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.370 [2024-11-20 10:03:29.145388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.370 
[2024-11-20 10:03:29.145408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.370 [2024-11-20 10:03:29.145414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.370 [2024-11-20 10:03:29.156744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.370 [2024-11-20 10:03:29.156763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.370 [2024-11-20 10:03:29.156769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.370 [2024-11-20 10:03:29.167077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.370 [2024-11-20 10:03:29.167096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.370 [2024-11-20 10:03:29.167102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.370 [2024-11-20 10:03:29.178625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.370 [2024-11-20 10:03:29.178644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.370 [2024-11-20 10:03:29.178651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.370 [2024-11-20 10:03:29.188929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b0a10) 00:29:58.370 [2024-11-20 10:03:29.188947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.370 [2024-11-20 10:03:29.188956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.370 3696.50 IOPS, 462.06 MiB/s 00:29:58.370 Latency(us) 00:29:58.370 [2024-11-20T09:03:29.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.370 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:58.370 nvme0n1 : 2.00 3701.56 462.70 0.00 0.00 4319.87 682.67 12670.29 00:29:58.370 [2024-11-20T09:03:29.286Z] =================================================================================================================== 00:29:58.370 [2024-11-20T09:03:29.286Z] Total : 3701.56 462.70 0.00 0.00 4319.87 682.67 12670.29 00:29:58.370 { 00:29:58.370 "results": [ 00:29:58.370 { 00:29:58.370 "job": "nvme0n1", 00:29:58.370 "core_mask": "0x2", 00:29:58.370 "workload": "randread", 00:29:58.370 "status": "finished", 00:29:58.370 "queue_depth": 16, 00:29:58.370 "io_size": 131072, 00:29:58.370 "runtime": 2.001587, 00:29:58.370 "iops": 3701.562809910336, 00:29:58.370 "mibps": 462.695351238792, 00:29:58.370 "io_failed": 0, 00:29:58.370 "io_timeout": 0, 00:29:58.370 "avg_latency_us": 4319.868642641832, 00:29:58.370 "min_latency_us": 682.6666666666666, 00:29:58.370 
"max_latency_us": 12670.293333333333 00:29:58.370 } 00:29:58.370 ], 00:29:58.370 "core_count": 1 00:29:58.370 } 00:29:58.370 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:58.370 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:58.370 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:58.370 | .driver_specific 00:29:58.370 | .nvme_error 00:29:58.370 | .status_code 00:29:58.370 | .command_transient_transport_error' 00:29:58.370 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:58.631 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 239 > 0 )) 00:29:58.631 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1550343 00:29:58.631 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1550343 ']' 00:29:58.631 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1550343 00:29:58.631 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:58.631 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:58.631 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1550343 00:29:58.631 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:58.631 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:58.631 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1550343' 00:29:58.631 killing process with pid 1550343 00:29:58.631 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1550343 00:29:58.631 Received shutdown signal, test time was about 2.000000 seconds 00:29:58.631 00:29:58.631 Latency(us) 00:29:58.631 [2024-11-20T09:03:29.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.631 [2024-11-20T09:03:29.547Z] =================================================================================================================== 00:29:58.631 [2024-11-20T09:03:29.547Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:58.631 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1550343 00:29:58.891 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:58.891 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:58.891 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:58.891 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:58.891 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:58.891 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1551049 
00:29:58.891 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1551049 /var/tmp/bperf.sock 00:29:58.891 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1551049 ']' 00:29:58.891 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:58.891 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:58.891 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:58.891 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:58.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:58.891 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:58.891 10:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:58.891 [2024-11-20 10:03:29.620703] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:29:58.891 [2024-11-20 10:03:29.620760] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1551049 ] 00:29:58.891 [2024-11-20 10:03:29.705892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.891 [2024-11-20 10:03:29.735217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.831 10:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.831 10:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:59.831 10:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:59.831 10:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:59.832 10:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:59.832 10:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.832 10:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:59.832 10:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.832 10:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:59.832 10:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst 
-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:00.092 nvme0n1 00:30:00.092 10:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:00.092 10:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.092 10:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:00.092 10:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.092 10:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:00.092 10:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:00.353 Running I/O for 2 seconds...
00:30:00.353 [2024-11-20 10:03:31.091470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166eff18 00:30:00.353 [2024-11-20 10:03:31.092429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.353 [2024-11-20 10:03:31.092457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0
[... several dozen further data digest errors from tcp.c:2233:data_crc32_calc_done on tqpair=(0x12a3520), one per WRITE issued while crc32c corruption is injected, each followed by the WRITE command print and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, repeat between 10:03:31.100 and 10:03:31.648 and are elided here ...]
00:30:00.879 [2024-11-20 10:03:31.655415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f7970 [2024-11-20 10:03:31.656074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24012
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.879 [2024-11-20 10:03:31.656089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:00.879 [2024-11-20 10:03:31.663855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166ea680 00:30:00.879 [2024-11-20 10:03:31.664530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.879 [2024-11-20 10:03:31.664546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:00.879 [2024-11-20 10:03:31.672291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166df118 00:30:00.879 [2024-11-20 10:03:31.672910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.879 [2024-11-20 10:03:31.672926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:00.879 [2024-11-20 10:03:31.680869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f6020 00:30:00.879 [2024-11-20 10:03:31.681490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.879 [2024-11-20 10:03:31.681506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:00.879 [2024-11-20 10:03:31.689315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166ff3c8 00:30:00.879 [2024-11-20 10:03:31.690001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.879 [2024-11-20 10:03:31.690017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:00.879 [2024-11-20 10:03:31.697769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166ef270 00:30:00.879 [2024-11-20 10:03:31.698402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.879 [2024-11-20 10:03:31.698418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:00.879 [2024-11-20 10:03:31.706228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f7970 00:30:00.879 [2024-11-20 10:03:31.706890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.879 [2024-11-20 10:03:31.706906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:00.879 [2024-11-20 10:03:31.714665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166ea680 00:30:00.879 [2024-11-20 10:03:31.715320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:36 nsid:1 lba:797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.879 [2024-11-20 10:03:31.715337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:00.879 [2024-11-20 10:03:31.723084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166df118 00:30:00.879 [2024-11-20 10:03:31.723752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.879 [2024-11-20 10:03:31.723768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:00.879 [2024-11-20 10:03:31.731517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f6020 00:30:00.879 [2024-11-20 10:03:31.732192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.879 [2024-11-20 10:03:31.732208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:00.879 [2024-11-20 10:03:31.739942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166ff3c8 00:30:00.879 [2024-11-20 10:03:31.740608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.879 [2024-11-20 10:03:31.740624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:00.879 [2024-11-20 10:03:31.748386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166ef270 00:30:00.879 [2024-11-20 10:03:31.749066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.879 [2024-11-20 10:03:31.749082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:00.879 [2024-11-20 10:03:31.756857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f7970 00:30:00.879 [2024-11-20 10:03:31.757535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.879 [2024-11-20 10:03:31.757552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:00.879 [2024-11-20 10:03:31.765301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166ea680 00:30:00.879 [2024-11-20 10:03:31.765973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.879 [2024-11-20 10:03:31.765989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:00.879 [2024-11-20 10:03:31.773761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166df118 00:30:00.879 [2024-11-20 10:03:31.774384] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.879 [2024-11-20 10:03:31.774401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:00.879 [2024-11-20 10:03:31.782885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f4b08 00:30:00.879 [2024-11-20 10:03:31.783853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:00.879 [2024-11-20 10:03:31.783869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:01.140 [2024-11-20 10:03:31.791006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166ed920 00:30:01.140 [2024-11-20 10:03:31.791796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.140 [2024-11-20 10:03:31.791813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:01.140 [2024-11-20 10:03:31.799591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166fc560 00:30:01.140 [2024-11-20 10:03:31.800359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.140 [2024-11-20 10:03:31.800375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.140 [2024-11-20 10:03:31.808008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166e4de8 00:30:01.140 [2024-11-20 10:03:31.808805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.140 [2024-11-20 10:03:31.808825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.140 [2024-11-20 10:03:31.816432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166e5ec8 00:30:01.140 [2024-11-20 10:03:31.817199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.140 [2024-11-20 10:03:31.817216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.140 [2024-11-20 10:03:31.824853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166e6fa8 00:30:01.141 [2024-11-20 10:03:31.825635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.825652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.833301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f4f40 00:30:01.141 [2024-11-20 10:03:31.834094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.834111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.841752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166e4578 00:30:01.141 [2024-11-20 10:03:31.842548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.842564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.850185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166e3498 00:30:01.141 [2024-11-20 10:03:31.850941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.850957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.858613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166e23b8 00:30:01.141 [2024-11-20 10:03:31.859390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.859407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.867050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166ec408 00:30:01.141 [2024-11-20 10:03:31.867829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.867846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.875490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166ecc78 00:30:01.141 [2024-11-20 10:03:31.876235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.876251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.883903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166e0a68 00:30:01.141 [2024-11-20 10:03:31.884707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.884723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.892339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166ee5c8 00:30:01.141 [2024-11-20 
10:03:31.893109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.893125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.900743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166fd640 00:30:01.141 [2024-11-20 10:03:31.901525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.901541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.909139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.141 [2024-11-20 10:03:31.909909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.909925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.917560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f8618 00:30:01.141 [2024-11-20 10:03:31.918326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.918342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.925988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f7538 00:30:01.141 [2024-11-20 10:03:31.926771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.926787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.934419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f6458 00:30:01.141 [2024-11-20 10:03:31.935199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.935216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.942842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166fc998 00:30:01.141 [2024-11-20 10:03:31.943628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.943645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.951246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166e49b0 
00:30:01.141 [2024-11-20 10:03:31.952032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.952048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.959680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166e5a90 00:30:01.141 [2024-11-20 10:03:31.960426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.960443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.968104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166e6b70 00:30:01.141 [2024-11-20 10:03:31.968889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.968905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.976639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f3e60 00:30:01.141 [2024-11-20 10:03:31.977434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.977451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.985051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166e3060 00:30:01.141 [2024-11-20 10:03:31.985826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.985842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:31.993503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166eb760 00:30:01.141 [2024-11-20 10:03:31.994273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:31.994290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:32.001907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166ed920 00:30:01.141 [2024-11-20 10:03:32.002703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:32.002720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:32.010346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with 
pdu=0x2000166ec840 00:30:01.141 [2024-11-20 10:03:32.011135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:32.011152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:32.018773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166e0630 00:30:01.141 [2024-11-20 10:03:32.019528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:32.019545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:32.027237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166ee190 00:30:01.141 [2024-11-20 10:03:32.028024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:32.028043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:32.035661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166feb58 00:30:01.141 [2024-11-20 10:03:32.036450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:32.036467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.141 [2024-11-20 10:03:32.044072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f92c0 00:30:01.141 [2024-11-20 10:03:32.044854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.141 [2024-11-20 10:03:32.044871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.142 [2024-11-20 10:03:32.052500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f81e0 00:30:01.404 [2024-11-20 10:03:32.053279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.053296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.060949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f7100 00:30:01.404 [2024-11-20 10:03:32.061737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.061754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.069380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12a3520) with pdu=0x2000166f6020 00:30:01.404 [2024-11-20 10:03:32.070151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.070170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.079230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f0bc0 00:30:01.404 29948.00 IOPS, 116.98 MiB/s [2024-11-20T09:03:32.320Z] [2024-11-20 10:03:32.079880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.079896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.088616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f20d8 00:30:01.404 [2024-11-20 10:03:32.089787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.089803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.095624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166fdeb0 00:30:01.404 [2024-11-20 10:03:32.096179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.096195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.103986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166e1710 00:30:01.404 [2024-11-20 10:03:32.104532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.104549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.113168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f0788 00:30:01.404 [2024-11-20 10:03:32.114084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.114101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.121878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.404 [2024-11-20 10:03:32.122149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.122169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 
10:03:32.130611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.404 [2024-11-20 10:03:32.130907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.130923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.139357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.404 [2024-11-20 10:03:32.139648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.139664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.148003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.404 [2024-11-20 10:03:32.148283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.148300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.156775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.404 [2024-11-20 10:03:32.157047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.157063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.165482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.404 [2024-11-20 10:03:32.165773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.165789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.174189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.404 [2024-11-20 10:03:32.174475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.174491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.182918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.404 [2024-11-20 10:03:32.183054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.183069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 
dnr:0 00:30:01.404 [2024-11-20 10:03:32.191632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.404 [2024-11-20 10:03:32.191909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.191925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.200376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.404 [2024-11-20 10:03:32.200682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.200698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.209055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.404 [2024-11-20 10:03:32.209201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.209217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.217743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.404 [2024-11-20 10:03:32.217876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.217891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.226436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.404 [2024-11-20 10:03:32.226704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.226720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.235118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.404 [2024-11-20 10:03:32.235414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.235430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.243826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.404 [2024-11-20 10:03:32.244048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.404 [2024-11-20 10:03:32.244064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.404 [2024-11-20 10:03:32.252534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.405 [2024-11-20 10:03:32.252800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.405 [2024-11-20 10:03:32.252819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.405 [2024-11-20 10:03:32.261267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.405 [2024-11-20 10:03:32.261553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.405 [2024-11-20 10:03:32.261569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.405 [2024-11-20 10:03:32.270005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.405 [2024-11-20 10:03:32.270266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.405 [2024-11-20 10:03:32.270283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.405 [2024-11-20 10:03:32.278695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.405 [2024-11-20 10:03:32.278973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.405 [2024-11-20 10:03:32.278988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.405 [2024-11-20 10:03:32.287447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.405 [2024-11-20 10:03:32.287710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.405 [2024-11-20 10:03:32.287726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.405 [2024-11-20 10:03:32.296219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.405 [2024-11-20 10:03:32.296449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.405 [2024-11-20 10:03:32.296465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.405 [2024-11-20 10:03:32.304956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.405 [2024-11-20 10:03:32.305208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.405 [2024-11-20 10:03:32.305224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.405 [2024-11-20 10:03:32.313669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.405 [2024-11-20 10:03:32.313955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.405 [2024-11-20 10:03:32.313971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.667 [2024-11-20 10:03:32.322357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.667 [2024-11-20 10:03:32.322615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.667 [2024-11-20 10:03:32.322631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.667 [2024-11-20 10:03:32.331054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.667 [2024-11-20 10:03:32.331324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.667 [2024-11-20 10:03:32.331348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.667 [2024-11-20 10:03:32.339807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.667 [2024-11-20 10:03:32.340050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.668 [2024-11-20 10:03:32.340066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.668 [2024-11-20 10:03:32.348545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.668 [2024-11-20 10:03:32.348820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.668 [2024-11-20 10:03:32.348835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.668 [2024-11-20 10:03:32.357241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.668 [2024-11-20 10:03:32.357491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.668 [2024-11-20 10:03:32.357506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.668 [2024-11-20 10:03:32.365948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.668 [2024-11-20 10:03:32.366270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.668 [2024-11-20 10:03:32.366286] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.668 [2024-11-20 10:03:32.374733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.668 [2024-11-20 10:03:32.374994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.668 [2024-11-20 10:03:32.375010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.668 [2024-11-20 10:03:32.383457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.668 [2024-11-20 10:03:32.383723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.668 [2024-11-20 10:03:32.383739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.668 [2024-11-20 10:03:32.392157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.668 [2024-11-20 10:03:32.392440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.668 [2024-11-20 10:03:32.392457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.668 [2024-11-20 10:03:32.400939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.668 [2024-11-20 10:03:32.401184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.668 [2024-11-20 10:03:32.401200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.668 [2024-11-20 10:03:32.409682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.668 [2024-11-20 10:03:32.409961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.668 [2024-11-20 10:03:32.409976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.668 [2024-11-20 10:03:32.418403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.668 [2024-11-20 10:03:32.418639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.668 [2024-11-20 10:03:32.418654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.668 [2024-11-20 10:03:32.427148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8 00:30:01.668 [2024-11-20 10:03:32.427421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.668 [2024-11-20 
10:03:32.427437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
[2024-11-20 10:03:32.435862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3520) with pdu=0x2000166f96f8
[2024-11-20 10:03:32.436102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-20 10:03:32.436118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
[... the same three-line pattern -- data_crc32_calc_done *ERROR*, nvme_io_qpair_print_command WRITE *NOTICE*, spdk_nvme_print_completion TRANSIENT TRANSPORT ERROR *NOTICE* -- repeats for every injected 4 KiB WRITE on qid:1 (cid 15/104/105, varying lba) from 10:03:32.444 through 10:03:33.082; repeated entries elided ...]
29647.00 IOPS, 115.81 MiB/s [2024-11-20T09:03:33.109Z]
Latency(us)
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
nvme0n1 : 2.00 29641.86 115.79 0.00 0.00 4311.31 2102.61 15400.96
===================================================================================================================
Total : 29641.86 115.79 0.00 0.00 4311.31 2102.61 15400.96
{
  "results": [
    {
      "job": "nvme0n1",
      "core_mask": "0x2",
      "workload": "randwrite",
      "status": "finished",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 2.004125,
      "iops": 29641.86365620907,
      "mibps": 115.78852990706667,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 4311.3149926494525,
      "min_latency_us": 2102.6133333333332,
      "max_latency_us": 15400.96
    }
  ],
  "core_count": 1
}
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 233 > 0 ))
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1551049
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1551049 ']'
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1551049
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1551049
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1551049'
killing process with pid 1551049
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1551049
Received shutdown signal, test time was about 2.000000 seconds

Latency(us)
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
===================================================================================================================
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
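The pass/fail gate above is the `(( 233 > 0 ))` check: the test reads the per-status NVMe error counters out of bdev_get_iostat and requires at least one COMMAND TRANSIENT TRANSPORT ERROR, proving the injected CRC corruption was detected end to end. As a sanity check on the table, 29641.86 IOPS x 4096 B is about 115.79 MiB/s, matching the MiB/s column. Below is a minimal sketch of the helper the trace is exercising, reconstructed from the rpc.py and jq calls shown in the host/digest.sh@27/@28 lines; treat it as an illustration of what the trace computes, not the exact script source.

#!/usr/bin/env bash
# Sketch of the transient-error lookup traced above (host/digest.sh@27-28).
# Assumes bdevperf is serving RPCs on /var/tmp/bperf.sock and was started
# after bdev_nvme_set_options --nvme-error-stat, so per-status counters exist.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

get_transient_errcount() {
    local bdev=$1
    "$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

# The digest-error test then asserts the counter is non-zero:
(( $(get_transient_errcount nvme0n1) > 0 ))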
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1551049
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1551728
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1551728 /var/tmp/bperf.sock
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1551728 ']'
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
10:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 10:03:33.499138] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
[2024-11-20 10:03:33.499199] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1551728 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
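For this second pass the harness starts a fresh bdevperf with 128 KiB I/O at queue depth 16 and blocks until its RPC socket is up. A condensed sketch of that launch sequence follows; the socket poll is a simplified stand-in for the real waitforlisten helper in common/autotest_common.sh, which also retries RPCs up to max_retries=100.

#!/usr/bin/env bash
# Launch bdevperf for the 131072-byte / qd=16 error run (host/digest.sh@56-60).
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
sock=/var/tmp/bperf.sock

"$bdevperf" -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Simplified stand-in for waitforlisten: poll until the UNIX socket appears.
until [[ -S "$sock" ]]; do
    kill -0 "$bperfpid" || { echo "bdevperf died" >&2; exit 1; }
    sleep 0.1
done
echo "bdevperf (pid $bperfpid) listening on $sock"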
[2024-11-20 10:03:33.582484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 10:03:33.611715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
10:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
10:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
10:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
10:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
10:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
10:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
10:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
10:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
10:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
nvme0n1
10:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
10:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
10:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
10:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
10:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
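The RPC sequence just traced is the heart of the test: enable per-status NVMe error counters with unlimited bdev-level retries, make sure crc32c injection starts disabled, attach the target with data digest (--ddgst) enabled, then switch injection to corrupt (the -i 32 argument presumably sets the injection interval, i.e. every 32nd crc32c operation) before kicking off I/O. Condensed into a script, with all commands taken verbatim from the trace and only the variable names added:

#!/usr/bin/env bash
# RPC setup for the digest-error run, as traced at host/digest.sh@61-69.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
sock=/var/tmp/bperf.sock

# Keep NVMe error statistics and retry failed I/O forever at the bdev layer.
"$rpc_py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start with crc32c error injection disabled, then attach with data digest on.
"$rpc_py" -s "$sock" accel_error_inject_error -o crc32c -t disable
"$rpc_py" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Now corrupt crc32c results (-i 32 as in the trace) and drive the workload.
"$rpc_py" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32
"$bperf_py" -s "$sock" perform_tests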
[2024-11-20 10:03:34.911976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90
[2024-11-20 10:03:34.912133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-20 10:03:34.912164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
[... the same three-line pattern -- data_crc32_calc_done *ERROR*, WRITE command *NOTICE*, TRANSIENT TRANSPORT ERROR completion *NOTICE* -- repeats for each injected 128 KiB WRITE on qid:1 (cid 9/10, varying lba, len:32) between 10:03:34.919 and 10:03:35.180; repeated entries elided ...]
[2024-11-20 10:03:35.184639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90
[2024-11-20 10:03:35.184735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10624 len:32
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.184750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.190692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.190746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.190761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.193860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.193918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.193933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.198730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.198795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.198811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.207965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.208036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.208051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.214496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.214564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.214580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.219811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.219882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.219897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.225084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.225133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:10 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.225148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.229670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.229715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.229731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.234386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.234433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.234451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.239030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.239098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.239113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.243946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.243994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.244009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.249674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.249743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.249758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.254372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.254418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.254434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.260908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.260972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.260987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.267964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.268026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.268042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.273378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.273433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.273449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.277724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.277774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.277789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.283594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.283648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.283663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.291474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.291616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.291632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.299411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.299625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.299641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.306764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 
[2024-11-20 10:03:35.306823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.306839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.312094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.438 [2024-11-20 10:03:35.312142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.438 [2024-11-20 10:03:35.312162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:04.438 [2024-11-20 10:03:35.320208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.439 [2024-11-20 10:03:35.320451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.439 [2024-11-20 10:03:35.320467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:04.439 [2024-11-20 10:03:35.328721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.439 [2024-11-20 10:03:35.328794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.439 [2024-11-20 10:03:35.328810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.439 [2024-11-20 10:03:35.333103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.439 [2024-11-20 10:03:35.333163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.439 [2024-11-20 10:03:35.333180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:04.439 [2024-11-20 10:03:35.338041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.439 [2024-11-20 10:03:35.338085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.439 [2024-11-20 10:03:35.338100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:04.439 [2024-11-20 10:03:35.344199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.439 [2024-11-20 10:03:35.344453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.439 [2024-11-20 10:03:35.344469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:04.701 [2024-11-20 10:03:35.353123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.701 [2024-11-20 10:03:35.353390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.701 [2024-11-20 10:03:35.353406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.701 [2024-11-20 10:03:35.362718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.701 [2024-11-20 10:03:35.362941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.701 [2024-11-20 10:03:35.362957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:04.701 [2024-11-20 10:03:35.372957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.701 [2024-11-20 10:03:35.373130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.701 [2024-11-20 10:03:35.373146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:04.701 [2024-11-20 10:03:35.382348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.701 [2024-11-20 10:03:35.382558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.701 [2024-11-20 10:03:35.382573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:04.701 [2024-11-20 10:03:35.392192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.701 [2024-11-20 10:03:35.392533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.701 [2024-11-20 10:03:35.392549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.701 [2024-11-20 10:03:35.402375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.701 [2024-11-20 10:03:35.402591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.701 [2024-11-20 10:03:35.402606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:04.701 [2024-11-20 10:03:35.409200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.701 [2024-11-20 10:03:35.409244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.701 [2024-11-20 10:03:35.409260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:04.701 [2024-11-20 10:03:35.413794] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.701 [2024-11-20 10:03:35.413852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.701 [2024-11-20 10:03:35.413870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:04.701 [2024-11-20 10:03:35.420949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.701 [2024-11-20 10:03:35.421014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.701 [2024-11-20 10:03:35.421029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.701 [2024-11-20 10:03:35.427588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.701 [2024-11-20 10:03:35.427691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.427708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.434906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.434965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.434980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.438647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.438693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.438708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.442546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.442598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.442613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.446378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.446433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.446448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 
00:30:04.702 [2024-11-20 10:03:35.450117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.450194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.450209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.453398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.453444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.453460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.456876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.456941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.456957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.460765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.460829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.460844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.465002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.465082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.465097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.470289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.470363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.470378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.474357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.474412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.474427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.477844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.477892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.477907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.481302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.481351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.481366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.489967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.490020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.490036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.494216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.494262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.494277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.498763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.498825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.498840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.503257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.503331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.503346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.510267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.510532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.510550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.518960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.519008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.519024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.524043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.524091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.524107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.527587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.527644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.527659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.531430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.531480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.531495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.535262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.535308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.535325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.539000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.539060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.539078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.543867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.543924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.543940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.547358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.702 [2024-11-20 10:03:35.547424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.702 [2024-11-20 10:03:35.547440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:04.702 [2024-11-20 10:03:35.551197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.703 [2024-11-20 10:03:35.551261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.703 [2024-11-20 10:03:35.551276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.703 [2024-11-20 10:03:35.554514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.703 [2024-11-20 10:03:35.554573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.703 [2024-11-20 10:03:35.554589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:04.703 [2024-11-20 10:03:35.558086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.703 [2024-11-20 10:03:35.558146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.703 [2024-11-20 10:03:35.558168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:04.703 [2024-11-20 10:03:35.561278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.703 [2024-11-20 10:03:35.561345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.703 [2024-11-20 10:03:35.561361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:04.703 [2024-11-20 10:03:35.564728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.703 [2024-11-20 10:03:35.564787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.703 [2024-11-20 10:03:35.564802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.703 [2024-11-20 10:03:35.568124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.703 [2024-11-20 10:03:35.568192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.703 
[2024-11-20 10:03:35.568207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:04.703 [2024-11-20 10:03:35.571677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.703 [2024-11-20 10:03:35.571731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.703 [2024-11-20 10:03:35.571746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:04.703 [2024-11-20 10:03:35.576944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.703 [2024-11-20 10:03:35.577232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.703 [2024-11-20 10:03:35.577249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:04.703 [2024-11-20 10:03:35.583185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.703 [2024-11-20 10:03:35.583244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.703 [2024-11-20 10:03:35.583259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.703 [2024-11-20 10:03:35.587349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.703 [2024-11-20 10:03:35.587404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.703 [2024-11-20 10:03:35.587420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:04.703 [2024-11-20 10:03:35.590929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.703 [2024-11-20 10:03:35.590984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.703 [2024-11-20 10:03:35.591000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:04.703 [2024-11-20 10:03:35.596713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.703 [2024-11-20 10:03:35.597022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.703 [2024-11-20 10:03:35.597040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:04.703 [2024-11-20 10:03:35.603402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.703 [2024-11-20 10:03:35.603449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11360 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.703 [2024-11-20 10:03:35.603465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:04.703 [2024-11-20 10:03:35.607050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.703 [2024-11-20 10:03:35.607098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.703 [2024-11-20 10:03:35.607113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:04.703 [2024-11-20 10:03:35.611510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:04.703 [2024-11-20 10:03:35.611564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.703 [2024-11-20 10:03:35.611582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.615386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.615445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.615461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.619237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.619284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.619300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.623192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.623258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.623273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.627581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.627702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.627718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.632779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.632836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:10 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.632852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.636057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.636140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.636156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.639257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.639317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.639333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.642569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.642615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.642631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.647971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.648231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.648248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.655197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.655241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.655256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.659127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.659184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.659200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.664482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.664541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.664557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.670206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.670280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.670295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.679617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.679860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.679877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.689742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.690014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.690031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.699640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.699708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.699723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.710095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.710361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.710378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.720174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.720390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.720406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.729467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 
[2024-11-20 10:03:35.729530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.729546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.738030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.738260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.738276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.746495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.746593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.746609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.751897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.751946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.002 [2024-11-20 10:03:35.751961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.002 [2024-11-20 10:03:35.755804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.002 [2024-11-20 10:03:35.755852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.755868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.759529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.759580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.759595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.763027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.763076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.763091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.769630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.769679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.769698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.773214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.773267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.773283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.776425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.776483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.776499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.779885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.779932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.779947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.783202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.783259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.783274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.786354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.786419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.786435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.789709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.789770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.789785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.792911] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.792972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.792987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.795959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.796007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.796022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.798835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.798898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.798914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.801834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.801882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.801898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.804627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.804683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.804699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.808128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.808176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.808191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.812373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.812419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.812435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 
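Every record in this stretch follows one pattern: the data_crc32_calc_done callback in tcp.c recomputes the CRC32C data digest (DDGST) over a data PDU received on tqpair 0x12a3860, finds that it does not match the digest carried in the PDU, and the affected WRITE completes with TRANSIENT TRANSPORT ERROR (00/22). NVMe/TCP defines DDGST as CRC32C over the PDU's data; the following is a minimal, self-contained sketch of that computation (illustrative only, not SPDK's accelerated implementation):

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Reflected CRC32C (Castagnoli): polynomial 0x1EDC6F41,
     * reflected form 0x82F63B78, initial value and final XOR all-ones.
     * This is the digest NVMe/TCP specifies for HDGST/DDGST. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++) {
                crc = (crc >> 1) ^ ((crc & 1) ? 0x82F63B78u : 0);
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* "123456789" is the standard CRC check string;
         * its CRC32C is the well-known check value 0xE3069283. */
        const uint8_t vec[] = "123456789";
        printf("DDGST = 0x%08X (expect 0xE3069283)\n",
               crc32c(vec, sizeof(vec) - 1));
        return 0;
    }

A mismatch between this recomputed value and the digest field trailing the PDU's data is exactly what the errors above report; since each completion carries dnr:0, the host treats every failure as retryable rather than fatal.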
00:30:05.003 [2024-11-20 10:03:35.816443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.816489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.816506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.819473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.819521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.819537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.822855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.822902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.822917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.826214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.826259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.826275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.829131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.829182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.829198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.832693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.832770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.832785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.837438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.837622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.837638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.843639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.843836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.843851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.848824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.848907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.848922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.854168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.854244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.854259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.861880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.861959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.861974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.867407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.867494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.867509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.872521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.872707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.872727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.878906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.879089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.879105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.885842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.885938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.885955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.003 [2024-11-20 10:03:35.892397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.003 [2024-11-20 10:03:35.892466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.003 [2024-11-20 10:03:35.892482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.319 [2024-11-20 10:03:35.899802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.319 [2024-11-20 10:03:35.899976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.319 [2024-11-20 10:03:35.899992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.319 5439.00 IOPS, 679.88 MiB/s [2024-11-20T09:03:36.235Z] [2024-11-20 10:03:35.908997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.319 [2024-11-20 10:03:35.909061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.319 [2024-11-20 10:03:35.909078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.319 [2024-11-20 10:03:35.914703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.319 [2024-11-20 10:03:35.914776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.319 [2024-11-20 10:03:35.914792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.319 [2024-11-20 10:03:35.921453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.319 [2024-11-20 10:03:35.921562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.319 [2024-11-20 10:03:35.921578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.319 [2024-11-20 10:03:35.927508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.319 [2024-11-20 10:03:35.927606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.319 [2024-11-20 
10:03:35.927621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.319 [2024-11-20 10:03:35.932942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.319 [2024-11-20 10:03:35.933095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.319 [2024-11-20 10:03:35.933111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.319 [2024-11-20 10:03:35.940292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.319 [2024-11-20 10:03:35.940356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.319 [2024-11-20 10:03:35.940371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.319 [2024-11-20 10:03:35.945630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.319 [2024-11-20 10:03:35.945692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.319 [2024-11-20 10:03:35.945708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.319 [2024-11-20 10:03:35.951978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.319 [2024-11-20 10:03:35.952041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.319 [2024-11-20 10:03:35.952056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.319 [2024-11-20 10:03:35.955444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.319 [2024-11-20 10:03:35.955497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.319 [2024-11-20 10:03:35.955512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.319 [2024-11-20 10:03:35.958469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.319 [2024-11-20 10:03:35.958512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.319 [2024-11-20 10:03:35.958528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.319 [2024-11-20 10:03:35.961470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.319 [2024-11-20 10:03:35.961520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:05.319 [2024-11-20 10:03:35.961535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.319 [2024-11-20 10:03:35.964560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.319 [2024-11-20 10:03:35.964611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.319 [2024-11-20 10:03:35.964626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.319 [2024-11-20 10:03:35.971640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.319 [2024-11-20 10:03:35.971688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.319 [2024-11-20 10:03:35.971707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.319 [2024-11-20 10:03:35.979094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.319 [2024-11-20 10:03:35.979378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.319 [2024-11-20 10:03:35.979396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.319 [2024-11-20 10:03:35.985710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.319 [2024-11-20 10:03:35.986007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.319 [2024-11-20 10:03:35.986024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.319 [2024-11-20 10:03:35.990664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.319 [2024-11-20 10:03:35.990715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.319 [2024-11-20 10:03:35.990730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.319 [2024-11-20 10:03:35.994386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.319 [2024-11-20 10:03:35.994432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.319 [2024-11-20 10:03:35.994448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.319 [2024-11-20 10:03:35.997360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.319 [2024-11-20 10:03:35.997406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11104 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.319 [2024-11-20 10:03:35.997421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.003401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.003478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.003493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.008417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.008486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.008502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.012008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.012063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.012078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.015384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.015440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.015459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.018804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.018863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.018879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.022087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.022155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.022175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.024820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.024864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.024880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.027374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.027419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.027435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.030095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.030152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.030172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.032704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.032754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.032769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.035216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.035265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.035280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.037753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.037796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.037812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.040323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.040377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.040392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.043069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.043121] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.043137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.045658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.045708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.045723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.048127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.048176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.048192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.050607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.050658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.050674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.053094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.053146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.053167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.055672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.055728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.055743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.058407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.058480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.058496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.062586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.062628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.062646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.066660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.066706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.066721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.070203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.070272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.070288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.073633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.073684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.073699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.076504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.076549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.076564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.079201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.079246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.079261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.320 [2024-11-20 10:03:36.083072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.320 [2024-11-20 10:03:36.083120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.320 [2024-11-20 10:03:36.083135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.088807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 
10:03:36.089106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.089123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.091628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.091680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.091695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.094124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.094190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.094206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.097587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.097635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.097650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.100537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.100581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.100597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.103365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.103416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.103430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.106283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.106340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.106356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.108998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with 
pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.109096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.109111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.113534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.113590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.113605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.116319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.116404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.116419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.119438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.119531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.119546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.127120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.127189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.127205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.130027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.130076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.130091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.133036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.133081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.133096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.136386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.136471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.136486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.140845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.141058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.141074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.149197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.149268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.149283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.153206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.153283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.153299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.156704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.156841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.156857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.160204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.160282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.160301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.163538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.163652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.163669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.167076] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.167165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.167181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.171704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.171771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.171786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.175470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.175516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.175531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.178932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.178984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.179000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.182833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.182912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.182927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.187855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.321 [2024-11-20 10:03:36.188087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.321 [2024-11-20 10:03:36.188105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.321 [2024-11-20 10:03:36.198346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.322 [2024-11-20 10:03:36.198633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.322 [2024-11-20 10:03:36.198650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.322 
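Each completion printed above carries the same status fields: the SCT/SC pair (00/22, i.e. generic command status / transient transport error), the queue and command IDs, and the phase (p), more (m), and do-not-retry (dnr) bits, with dnr:0 marking the error retryable. A hedged sketch of how those fields unpack from the 16-bit status word in completion queue entry Dword 3 (layout per the NVMe base specification; the struct and function names here are illustrative, not SPDK's):

    #include <stdint.h>
    #include <stdio.h>

    /* Status word layout: bit 0 phase tag, bits 8:1 status code (SC),
     * bits 11:9 status code type (SCT), bits 13:12 command retry delay,
     * bit 14 more (M), bit 15 do-not-retry (DNR). */
    struct nvme_status {
        unsigned p, sc, sct, crd, m, dnr;
    };

    static struct nvme_status decode(uint16_t w)
    {
        struct nvme_status s;
        s.p   =  w        & 0x1;
        s.sc  = (w >> 1)  & 0xFF;
        s.sct = (w >> 9)  & 0x7;
        s.crd = (w >> 12) & 0x3;
        s.m   = (w >> 14) & 0x1;
        s.dnr = (w >> 15) & 0x1;
        return s;
    }

    int main(void)
    {
        /* SCT 0x0, SC 0x22 (transient transport error), all flag bits
         * clear: the "(00/22) ... p:0 m:0 dnr:0" pattern in this log. */
        uint16_t w = (0x0u << 9) | (0x22u << 1);
        struct nvme_status s = decode(w);
        printf("(%02x/%02x) p:%u m:%u dnr:%u -> %s\n",
               s.sct, s.sc, s.p, s.m, s.dnr,
               s.dnr ? "do not retry" : "retryable");
        return 0;
    }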
[2024-11-20 10:03:36.208099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.322 [2024-11-20 10:03:36.208345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.322 [2024-11-20 10:03:36.208363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.322 [2024-11-20 10:03:36.211723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.322 [2024-11-20 10:03:36.211772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.322 [2024-11-20 10:03:36.211787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.322 [2024-11-20 10:03:36.214392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.322 [2024-11-20 10:03:36.214455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.322 [2024-11-20 10:03:36.214471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.322 [2024-11-20 10:03:36.217279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.322 [2024-11-20 10:03:36.217333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.322 [2024-11-20 10:03:36.217349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.322 [2024-11-20 10:03:36.220227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.322 [2024-11-20 10:03:36.220279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.322 [2024-11-20 10:03:36.220294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.322 [2024-11-20 10:03:36.225174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.322 [2024-11-20 10:03:36.225408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.322 [2024-11-20 10:03:36.225423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.584 [2024-11-20 10:03:36.234481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.584 [2024-11-20 10:03:36.234737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.584 [2024-11-20 10:03:36.234754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:006b p:0 m:0 dnr:0 00:30:05.584 [2024-11-20 10:03:36.243556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.584 [2024-11-20 10:03:36.243867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.584 [2024-11-20 10:03:36.243884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.584 [2024-11-20 10:03:36.253475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.584 [2024-11-20 10:03:36.253761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.584 [2024-11-20 10:03:36.253778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.584 [2024-11-20 10:03:36.263423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.584 [2024-11-20 10:03:36.263684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.584 [2024-11-20 10:03:36.263700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.584 [2024-11-20 10:03:36.272231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.584 [2024-11-20 10:03:36.272523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.584 [2024-11-20 10:03:36.272540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.584 [2024-11-20 10:03:36.282368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.584 [2024-11-20 10:03:36.282584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.584 [2024-11-20 10:03:36.282600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.584 [2024-11-20 10:03:36.291925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.584 [2024-11-20 10:03:36.292116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.584 [2024-11-20 10:03:36.292132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.584 [2024-11-20 10:03:36.301785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.584 [2024-11-20 10:03:36.302021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.584 [2024-11-20 10:03:36.302036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.584 [2024-11-20 10:03:36.311249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.584 [2024-11-20 10:03:36.311358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.584 [2024-11-20 10:03:36.311374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.584 [2024-11-20 10:03:36.321191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.584 [2024-11-20 10:03:36.321419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.584 [2024-11-20 10:03:36.321434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.584 [2024-11-20 10:03:36.330760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.584 [2024-11-20 10:03:36.331097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.584 [2024-11-20 10:03:36.331113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.584 [2024-11-20 10:03:36.339821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.584 [2024-11-20 10:03:36.340084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.584 [2024-11-20 10:03:36.340104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.584 [2024-11-20 10:03:36.345582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.584 [2024-11-20 10:03:36.345660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.584 [2024-11-20 10:03:36.345675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.584 [2024-11-20 10:03:36.348358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.584 [2024-11-20 10:03:36.348441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.584 [2024-11-20 10:03:36.348456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.584 [2024-11-20 10:03:36.351132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.584 [2024-11-20 10:03:36.351217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.584 [2024-11-20 10:03:36.351233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.584 [2024-11-20 10:03:36.353781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.584 [2024-11-20 10:03:36.353866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.584 [2024-11-20 10:03:36.353882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.584 [2024-11-20 10:03:36.356509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.584 [2024-11-20 10:03:36.356603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.584 [2024-11-20 10:03:36.356619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.584 [2024-11-20 10:03:36.359220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.584 [2024-11-20 10:03:36.359300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.584 [2024-11-20 10:03:36.359316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.584 [2024-11-20 10:03:36.361963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.584 [2024-11-20 10:03:36.362048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.584 [2024-11-20 10:03:36.362064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 10:03:36.364812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.585 [2024-11-20 10:03:36.364890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.585 [2024-11-20 10:03:36.364905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 10:03:36.367482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.585 [2024-11-20 10:03:36.367572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.585 [2024-11-20 10:03:36.367588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 10:03:36.369999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.585 [2024-11-20 10:03:36.370085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.585 [2024-11-20 10:03:36.370100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 10:03:36.372652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.585 [2024-11-20 10:03:36.372740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.585 [2024-11-20 10:03:36.372755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 10:03:36.377385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.585 [2024-11-20 10:03:36.377650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.585 [2024-11-20 10:03:36.377667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 10:03:36.384529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.585 [2024-11-20 10:03:36.384599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.585 [2024-11-20 10:03:36.384615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 10:03:36.391814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.585 [2024-11-20 10:03:36.391877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.585 [2024-11-20 10:03:36.391892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 10:03:36.396043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.585 [2024-11-20 10:03:36.396095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.585 [2024-11-20 10:03:36.396110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 10:03:36.399318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.585 [2024-11-20 10:03:36.399573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.585 [2024-11-20 10:03:36.399588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 10:03:36.406715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.585 [2024-11-20 10:03:36.406790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.585 
[2024-11-20 10:03:36.406805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 10:03:36.413364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.585 [2024-11-20 10:03:36.413624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.585 [2024-11-20 10:03:36.413641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 10:03:36.422346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.585 [2024-11-20 10:03:36.422600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.585 [2024-11-20 10:03:36.422616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 10:03:36.432500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.585 [2024-11-20 10:03:36.432765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.585 [2024-11-20 10:03:36.432782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 10:03:36.442889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.585 [2024-11-20 10:03:36.443140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.585 [2024-11-20 10:03:36.443157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 10:03:36.454137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.585 [2024-11-20 10:03:36.454404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.585 [2024-11-20 10:03:36.454420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 10:03:36.463607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.585 [2024-11-20 10:03:36.463797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.585 [2024-11-20 10:03:36.463813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 10:03:36.474142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.585 [2024-11-20 10:03:36.474396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6720 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.585 [2024-11-20 10:03:36.474412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 10:03:36.484499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.585 [2024-11-20 10:03:36.484797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.585 [2024-11-20 10:03:36.484814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.585 [2024-11-20 10:03:36.494892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.585 [2024-11-20 10:03:36.495161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.585 [2024-11-20 10:03:36.495180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.503250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.503302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.503318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.506310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.506358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.506374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.509031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.509085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.509100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.511708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.511754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.511770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.514385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.514467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:10 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.514482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.517163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.517209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.517224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.519780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.519836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.519851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.522292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.522336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.522351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.524783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.524841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.524857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.527270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.527324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.527339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.529768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.529810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.529825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.532250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.532300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.532315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.534716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.534772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.534787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.539649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.539857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.539873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.543572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.543655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.543670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.546084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.546173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.546188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.548546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.548625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.548640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.551030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.551109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.551124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.553884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 
[2024-11-20 10:03:36.553978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.553995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.556982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.557041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.557056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.848 [2024-11-20 10:03:36.564419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.848 [2024-11-20 10:03:36.564497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.848 [2024-11-20 10:03:36.564513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.569351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.569509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.569525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.577617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.577995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.578012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.585163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.585424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.585440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.593003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.593271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.593286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.601621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.601809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.601828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.610333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.610404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.610420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.617470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.617551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.617566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.621305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.621354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.621370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.624217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.624275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.624290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.627276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.627346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.627361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.630421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.630487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.630502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.633610] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.633705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.633721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.637192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.637246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.637262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.642231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.642297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.642313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.647360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.647412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.647427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.650846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.650919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.650935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.654322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.654368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.654384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.658002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.658055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.658070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 
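# Annotation (not part of the original trace): every record pair above follows the
# same pattern -- tcp.c:2233:data_crc32_calc_done reports a CRC32C mismatch on the
# NVMe/TCP data digest (DDGST), then nvme_qpair.c prints the affected WRITE and its
# completion as TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. a retryable error,
# which is exactly what this fault-injection test expects. A hedged way to tally the
# pairs from a saved copy of this console output (build.log is a hypothetical name):
#
#   grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' build.log
#   grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' build.log
#
# The two counts should match, one completion per injected digest error.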
00:30:05.849 [2024-11-20 10:03:36.661496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.661740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.661755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.665140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.665315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.665330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.673559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.673618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.673634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.681462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.681661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.681676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.689865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.689942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.689957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.694020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.694087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.694102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.697144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.697257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.697272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.700291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.849 [2024-11-20 10:03:36.700342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.849 [2024-11-20 10:03:36.700358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.849 [2024-11-20 10:03:36.707792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.850 [2024-11-20 10:03:36.707852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.850 [2024-11-20 10:03:36.707868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.850 [2024-11-20 10:03:36.715593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.850 [2024-11-20 10:03:36.715644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.850 [2024-11-20 10:03:36.715659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.850 [2024-11-20 10:03:36.723346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.850 [2024-11-20 10:03:36.723642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.850 [2024-11-20 10:03:36.723659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.850 [2024-11-20 10:03:36.733088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.850 [2024-11-20 10:03:36.733152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.850 [2024-11-20 10:03:36.733173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.850 [2024-11-20 10:03:36.742634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.850 [2024-11-20 10:03:36.742831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.850 [2024-11-20 10:03:36.742852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.850 [2024-11-20 10:03:36.752963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:05.850 [2024-11-20 10:03:36.753246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.850 [2024-11-20 10:03:36.753262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:06.111 [2024-11-20 10:03:36.763712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.111 [2024-11-20 10:03:36.763935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.111 [2024-11-20 10:03:36.763951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:06.111 [2024-11-20 10:03:36.773961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.111 [2024-11-20 10:03:36.774207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.111 [2024-11-20 10:03:36.774223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:06.111 [2024-11-20 10:03:36.784371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.111 [2024-11-20 10:03:36.784631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.111 [2024-11-20 10:03:36.784646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:06.111 [2024-11-20 10:03:36.795023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.111 [2024-11-20 10:03:36.795312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.111 [2024-11-20 10:03:36.795329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:06.111 [2024-11-20 10:03:36.805542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.111 [2024-11-20 10:03:36.805810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.111 [2024-11-20 10:03:36.805827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:06.111 [2024-11-20 10:03:36.814795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.111 [2024-11-20 10:03:36.814853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.111 [2024-11-20 10:03:36.814869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:06.111 [2024-11-20 10:03:36.823245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.111 [2024-11-20 10:03:36.823295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.111 [2024-11-20 10:03:36.823309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:06.111 [2024-11-20 10:03:36.832851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.111 [2024-11-20 10:03:36.833115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.111 [2024-11-20 10:03:36.833131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:06.111 [2024-11-20 10:03:36.840260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.111 [2024-11-20 10:03:36.840311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.111 [2024-11-20 10:03:36.840326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:06.111 [2024-11-20 10:03:36.848207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.111 [2024-11-20 10:03:36.848525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.111 [2024-11-20 10:03:36.848542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:06.111 [2024-11-20 10:03:36.855946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.111 [2024-11-20 10:03:36.856007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.111 [2024-11-20 10:03:36.856022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:06.111 [2024-11-20 10:03:36.864905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.112 [2024-11-20 10:03:36.865359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.112 [2024-11-20 10:03:36.865376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:06.112 [2024-11-20 10:03:36.872942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.112 [2024-11-20 10:03:36.872995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.112 [2024-11-20 10:03:36.873011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:06.112 [2024-11-20 10:03:36.880475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.112 [2024-11-20 10:03:36.880531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.112 
[2024-11-20 10:03:36.880546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:06.112 [2024-11-20 10:03:36.884657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.112 [2024-11-20 10:03:36.884725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.112 [2024-11-20 10:03:36.884740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:06.112 [2024-11-20 10:03:36.887388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.112 [2024-11-20 10:03:36.887463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.112 [2024-11-20 10:03:36.887481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:06.112 [2024-11-20 10:03:36.890117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.112 [2024-11-20 10:03:36.890178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.112 [2024-11-20 10:03:36.890194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:06.112 [2024-11-20 10:03:36.892748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.112 [2024-11-20 10:03:36.892915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.112 [2024-11-20 10:03:36.892931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:06.112 [2024-11-20 10:03:36.895608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.112 [2024-11-20 10:03:36.895672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.112 [2024-11-20 10:03:36.895687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:06.112 [2024-11-20 10:03:36.899399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.112 [2024-11-20 10:03:36.899529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.112 [2024-11-20 10:03:36.899544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:06.112 [2024-11-20 10:03:36.902256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.112 [2024-11-20 10:03:36.902310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4480 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.112 [2024-11-20 10:03:36.902326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:06.112 5688.50 IOPS, 711.06 MiB/s [2024-11-20T09:03:37.028Z] [2024-11-20 10:03:36.905850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a3860) with pdu=0x2000166fef90 00:30:06.112 [2024-11-20 10:03:36.905899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.112 [2024-11-20 10:03:36.905915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:06.112 00:30:06.112 Latency(us) 00:30:06.112 [2024-11-20T09:03:37.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.112 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:06.112 nvme0n1 : 2.00 5689.53 711.19 0.00 0.00 2808.67 1194.67 17257.81 00:30:06.112 [2024-11-20T09:03:37.028Z] =================================================================================================================== 00:30:06.112 [2024-11-20T09:03:37.028Z] Total : 5689.53 711.19 0.00 0.00 2808.67 1194.67 17257.81 00:30:06.112 { 00:30:06.112 "results": [ 00:30:06.112 { 00:30:06.112 "job": "nvme0n1", 00:30:06.112 "core_mask": "0x2", 00:30:06.112 "workload": "randwrite", 00:30:06.112 "status": "finished", 00:30:06.112 "queue_depth": 16, 00:30:06.112 "io_size": 131072, 00:30:06.112 "runtime": 2.002276, 00:30:06.112 "iops": 5689.5253201856285, 00:30:06.112 "mibps": 711.1906650232036, 00:30:06.112 "io_failed": 0, 00:30:06.112 "io_timeout": 0, 00:30:06.112 "avg_latency_us": 2808.66936329588, 00:30:06.112 "min_latency_us": 1194.6666666666667, 00:30:06.112 "max_latency_us": 17257.81333333333 00:30:06.112 } 00:30:06.112 ], 00:30:06.112 "core_count": 1 00:30:06.112 } 00:30:06.112 10:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:06.112 10:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:06.112 10:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:06.112 | .driver_specific 00:30:06.112 | .nvme_error 00:30:06.112 | .status_code 00:30:06.112 | .command_transient_transport_error' 00:30:06.112 10:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:06.372 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 368 > 0 )) 00:30:06.372 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1551728 00:30:06.372 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1551728 ']' 00:30:06.372 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1551728 00:30:06.372 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:06.372 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:06.372 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1551728 00:30:06.372 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:06.372 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:06.372 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1551728' 00:30:06.372 killing process with pid 1551728 00:30:06.372 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1551728 00:30:06.372 Received shutdown signal, test time was about 2.000000 seconds 00:30:06.372 00:30:06.372 Latency(us) 00:30:06.372 [2024-11-20T09:03:37.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.372 [2024-11-20T09:03:37.288Z] =================================================================================================================== 00:30:06.372 [2024-11-20T09:03:37.288Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:06.372 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1551728 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1549311 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1549311 ']' 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1549311 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1549311 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1549311' 00:30:06.634 killing process with pid 1549311 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1549311 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1549311 00:30:06.634 00:30:06.634 real 0m16.566s 00:30:06.634 user 0m32.736s 00:30:06.634 sys 0m3.696s 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:06.634 ************************************ 00:30:06.634 END TEST nvmf_digest_error 00:30:06.634 ************************************ 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 
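# Annotation (not part of the original trace): the pass/fail decision above comes from
# get_transient_errcount, which queries bdevperf's RPC socket for per-bdev NVMe error
# counters and asserts the transient-transport count is non-zero (here: 368). A minimal
# standalone sketch of that check, using the rpc.py path, socket, and bdev name from
# this run (all environment-specific):
#
#   rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
#   errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
#     | jq -r '.bdevs[0]
#              | .driver_specific
#              | .nvme_error
#              | .status_code
#              | .command_transient_transport_error')
#   # Fail the test unless at least one command completed with a
#   # transient transport error.
#   (( errcount > 0 )) || exit 1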
00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:06.634 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:06.634 rmmod nvme_tcp 00:30:06.895 rmmod nvme_fabrics 00:30:06.895 rmmod nvme_keyring 00:30:06.895 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:06.895 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:30:06.895 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:30:06.895 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1549311 ']' 00:30:06.895 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1549311 00:30:06.895 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1549311 ']' 00:30:06.895 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1549311 00:30:06.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1549311) - No such process 00:30:06.895 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1549311 is not found' 00:30:06.896 Process with pid 1549311 is not found 00:30:06.896 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:06.896 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:06.896 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:06.896 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:30:06.896 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:30:06.896 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:06.896 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:30:06.896 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:06.896 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:06.896 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.896 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.896 10:03:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.807 10:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:08.807 00:30:08.807 real 0m43.583s 00:30:08.807 user 1m8.412s 00:30:08.807 sys 0m13.249s 00:30:08.807 10:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:08.807 10:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:08.807 ************************************ 00:30:08.807 END TEST nvmf_digest 00:30:08.807 ************************************ 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 
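# Annotation (not part of the original trace): with the digest suites finished,
# nvmftestfini tears down the kernel NVMe/TCP stack and restores networking before
# the next suite (nvmf_bdevperf) begins. A condensed sketch of that teardown,
# reconstructed from the trace above (the cvl_0_1 interface name is specific to
# this rig):
#
#   sync
#   modprobe -v -r nvme-tcp      # rmmod output above shows nvme_tcp,
#   modprobe -v -r nvme-fabrics  # nvme_fabrics, and nvme_keyring unloading
#   # Keep all iptables rules except the SPDK_NVMF ones added by the test.
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
#   ip -4 addr flush cvl_0_1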
00:30:09.067 10:03:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.067 ************************************ 00:30:09.067 START TEST nvmf_bdevperf 00:30:09.067 ************************************ 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:09.067 * Looking for test storage... 00:30:09.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:09.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.067 --rc genhtml_branch_coverage=1 00:30:09.067 --rc genhtml_function_coverage=1 00:30:09.067 --rc genhtml_legend=1 00:30:09.067 --rc geninfo_all_blocks=1 00:30:09.067 --rc geninfo_unexecuted_blocks=1 00:30:09.067 00:30:09.067 ' 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:09.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.067 --rc genhtml_branch_coverage=1 00:30:09.067 --rc genhtml_function_coverage=1 00:30:09.067 --rc genhtml_legend=1 00:30:09.067 --rc geninfo_all_blocks=1 00:30:09.067 --rc geninfo_unexecuted_blocks=1 00:30:09.067 00:30:09.067 ' 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:09.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.067 --rc genhtml_branch_coverage=1 00:30:09.067 --rc genhtml_function_coverage=1 00:30:09.067 --rc genhtml_legend=1 00:30:09.067 --rc geninfo_all_blocks=1 00:30:09.067 --rc geninfo_unexecuted_blocks=1 00:30:09.067 00:30:09.067 ' 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:09.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.067 --rc genhtml_branch_coverage=1 00:30:09.067 --rc genhtml_function_coverage=1 00:30:09.067 --rc genhtml_legend=1 00:30:09.067 --rc geninfo_all_blocks=1 00:30:09.067 --rc geninfo_unexecuted_blocks=1 00:30:09.067 00:30:09.067 ' 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.067 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.327 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:09.327 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:09.327 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.327 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.327 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.327 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.327 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.327 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:09.327 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.327 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.327 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.327 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.327 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:09.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.328 10:03:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.328 10:03:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:09.328 10:03:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:09.328 10:03:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:09.328 10:03:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:17.464 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:17.465 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:17.465 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
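The device-discovery loop above reduces to a small sysfs walk: for each PCI function that matched a known Intel/Mellanox device ID, the bound kernel netdev names are simply the directory entries under that device's net/ subtree. A condensed sketch of the loop the trace is executing (the operstate/link-up filtering is elided):

    # Map PCI functions to kernel net device names via sysfs.
    shopt -s nullglob                                     # non-matching glob -> empty
    net_devs=()
    for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
      ((${#pci_net_devs[@]})) || continue                 # no netdev bound to this fn
      pci_net_devs=("${pci_net_devs[@]##*/}")             # keep basenames only
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
    done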
00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:17.465 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:17.465 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:17.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:17.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:30:17.465 00:30:17.465 --- 10.0.0.2 ping statistics --- 00:30:17.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.465 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:17.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:17.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:30:17.465 00:30:17.465 --- 10.0.0.1 ping statistics --- 00:30:17.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.465 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1556754 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1556754 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1556754 ']' 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:17.465 10:03:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:17.465 [2024-11-20 10:03:47.594432] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
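The namespace plumbing traced just above (ip netns add through the two pings) is the harness splitting one dual-port NIC into isolated target and initiator endpoints, so NVMe/TCP traffic crosses a real link instead of loopback. Collected into one runnable sketch, with addresses and interface names exactly as traced:

    # Target port lives in its own namespace; initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP listen port, tagged so teardown can find the rule.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator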
00:30:17.465 [2024-11-20 10:03:47.594499] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:17.465 [2024-11-20 10:03:47.693863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:17.465 [2024-11-20 10:03:47.745557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:17.465 [2024-11-20 10:03:47.745610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:17.465 [2024-11-20 10:03:47.745619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:17.465 [2024-11-20 10:03:47.745626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:17.466 [2024-11-20 10:03:47.745633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:17.466 [2024-11-20 10:03:47.747748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:17.466 [2024-11-20 10:03:47.747911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.466 [2024-11-20 10:03:47.747912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:17.727 [2024-11-20 10:03:48.450642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:17.727 Malloc0 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:17.727 [2024-11-20 10:03:48.528713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.727 { 00:30:17.727 "params": { 00:30:17.727 "name": "Nvme$subsystem", 00:30:17.727 "trtype": "$TEST_TRANSPORT", 00:30:17.727 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.727 "adrfam": "ipv4", 00:30:17.727 "trsvcid": "$NVMF_PORT", 00:30:17.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.727 "hdgst": ${hdgst:-false}, 00:30:17.727 "ddgst": ${ddgst:-false} 00:30:17.727 }, 00:30:17.727 "method": "bdev_nvme_attach_controller" 00:30:17.727 } 00:30:17.727 EOF 00:30:17.727 )") 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:17.727 10:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:17.727 "params": { 00:30:17.727 "name": "Nvme1", 00:30:17.727 "trtype": "tcp", 00:30:17.727 "traddr": "10.0.0.2", 00:30:17.727 "adrfam": "ipv4", 00:30:17.727 "trsvcid": "4420", 00:30:17.727 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:17.727 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:17.727 "hdgst": false, 00:30:17.727 "ddgst": false 00:30:17.727 }, 00:30:17.727 "method": "bdev_nvme_attach_controller" 00:30:17.727 }' 00:30:17.727 [2024-11-20 10:03:48.598525] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
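On the host side, bdevperf never reads a config file from disk: gen_nvmf_target_json is fed in through process substitution, which is why the command line records --json /dev/fd/62. A sketch of the idiom; the wrapper structure noted in the comment is the standard SPDK JSON-config schema and is an inference here, since the trace only prints the inner fragment:

    # Hand bdevperf its config via an anonymous fd (no temp file needed).
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w verify -t 1
    # The generated document is a normal SPDK JSON config; the fragment printed
    # above lands inside:
    #   { "subsystems": [ { "subsystem": "bdev",
    #       "config": [ { "method": "bdev_nvme_attach_controller", ... } ] } ] }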
00:30:17.727 [2024-11-20 10:03:48.598590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1556945 ] 00:30:17.988 [2024-11-20 10:03:48.691308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.988 [2024-11-20 10:03:48.744496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.249 Running I/O for 1 seconds... 00:30:19.191 8687.00 IOPS, 33.93 MiB/s 00:30:19.191 Latency(us) 00:30:19.191 [2024-11-20T09:03:50.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.191 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:19.191 Verification LBA range: start 0x0 length 0x4000 00:30:19.191 Nvme1n1 : 1.01 8767.77 34.25 0.00 0.00 14532.83 583.68 17367.04 00:30:19.191 [2024-11-20T09:03:50.107Z] =================================================================================================================== 00:30:19.191 [2024-11-20T09:03:50.107Z] Total : 8767.77 34.25 0.00 0.00 14532.83 583.68 17367.04 00:30:19.191 10:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1557161 00:30:19.191 10:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:19.191 10:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:19.191 10:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:19.191 10:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:19.191 10:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:19.191 10:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:19.191 10:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:19.191 { 00:30:19.191 "params": { 00:30:19.191 "name": "Nvme$subsystem", 00:30:19.191 "trtype": "$TEST_TRANSPORT", 00:30:19.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.191 "adrfam": "ipv4", 00:30:19.191 "trsvcid": "$NVMF_PORT", 00:30:19.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.191 "hdgst": ${hdgst:-false}, 00:30:19.191 "ddgst": ${ddgst:-false} 00:30:19.191 }, 00:30:19.191 "method": "bdev_nvme_attach_controller" 00:30:19.191 } 00:30:19.191 EOF 00:30:19.191 )") 00:30:19.191 10:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:19.191 10:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:30:19.191 10:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:19.191 10:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:19.191 "params": { 00:30:19.191 "name": "Nvme1", 00:30:19.191 "trtype": "tcp", 00:30:19.191 "traddr": "10.0.0.2", 00:30:19.191 "adrfam": "ipv4", 00:30:19.191 "trsvcid": "4420", 00:30:19.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:19.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:19.191 "hdgst": false, 00:30:19.191 "ddgst": false 00:30:19.191 }, 00:30:19.191 "method": "bdev_nvme_attach_controller" 00:30:19.191 }' 00:30:19.451 [2024-11-20 10:03:50.129225] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:30:19.451 [2024-11-20 10:03:50.129304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557161 ] 00:30:19.451 [2024-11-20 10:03:50.223170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.451 [2024-11-20 10:03:50.272342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.713 Running I/O for 15 seconds... 00:30:22.041 11175.00 IOPS, 43.65 MiB/s [2024-11-20T09:03:53.222Z] 11146.50 IOPS, 43.54 MiB/s [2024-11-20T09:03:53.222Z] 10:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1556754 00:30:22.306 10:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:22.306 [2024-11-20 10:03:53.080123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-11-20 10:03:53.080167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.306 [2024-11-20 10:03:53.080189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.306 [2024-11-20 10:03:53.080199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.306 [2024-11-20 10:03:53.080211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.306 [2024-11-20 10:03:53.080220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.306 [2024-11-20 10:03:53.080232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.306 [2024-11-20 10:03:53.080240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.306 [2024-11-20 10:03:53.080250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.306 [2024-11-20 10:03:53.080259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.306 [2024-11-20 10:03:53.080269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.306 [2024-11-20 
10:03:53.080279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:22.306 [the shutdown trace continues in this exact two-line pattern for every in-flight request: nvme_qpair.c: 243:nvme_io_qpair_print_command prints each queued WRITE (sqid:1, nsid:1, len:8, lba 94368 through 94928 in 8-block steps, varying cid) and nvme_qpair.c: 474:spdk_nvme_print_completion reports ABORTED - SQ DELETION (00/08) for it. This flood is the expected effect of the kill -9 of target pid 1556754 above: the controller's submission queues are deleted while bdevperf still has up to 128 requests outstanding, so every queued I/O completes as aborted. The per-LBA repetitions are condensed here.]
[2024-11-20 10:03:53.081640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.308 [2024-11-20 10:03:53.081650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.308 [2024-11-20 10:03:53.081658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.308 [2024-11-20 10:03:53.081667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.308 [2024-11-20 10:03:53.081674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.308 [2024-11-20 10:03:53.081683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.308 [2024-11-20 10:03:53.081691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.308 [2024-11-20 10:03:53.081701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.308 [2024-11-20 10:03:53.081708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.308 [2024-11-20 10:03:53.081717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.308 [2024-11-20 10:03:53.081726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.308 [2024-11-20 10:03:53.081736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.308 [2024-11-20 10:03:53.081743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.308 [2024-11-20 10:03:53.081753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.308 [2024-11-20 10:03:53.081761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.308 [2024-11-20 10:03:53.081770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.308 [2024-11-20 10:03:53.081777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.308 [2024-11-20 10:03:53.081787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.308 [2024-11-20 10:03:53.081794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.308 [2024-11-20 10:03:53.081804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.308 [2024-11-20 10:03:53.081812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.308 [2024-11-20 10:03:53.081821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.308 [2024-11-20 10:03:53.081828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.308 [2024-11-20 10:03:53.081838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.308 [2024-11-20 10:03:53.081845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.308 [2024-11-20 10:03:53.081855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.308 [2024-11-20 10:03:53.081863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.308 [2024-11-20 10:03:53.081872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.308 [2024-11-20 10:03:53.081879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.308 [2024-11-20 10:03:53.081888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.308 [2024-11-20 10:03:53.081896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.308 [2024-11-20 10:03:53.081906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.308 [2024-11-20 10:03:53.081913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.308 [2024-11-20 10:03:53.081922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.308 [2024-11-20 10:03:53.081929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.081939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.081947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.081957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.081965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.081974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.081981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.081991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.081999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 
10:03:53.082336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.309 [2024-11-20 10:03:53.082496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.309 [2024-11-20 10:03:53.082505] nvme_tcp.c: 
00:30:22.309 [2024-11-20 10:03:53.082505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60390 is same with the state(6) to be set
00:30:22.309 [2024-11-20 10:03:53.082513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:22.309 [2024-11-20 10:03:53.082519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:22.309 [2024-11-20 10:03:53.082526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95288 len:8 PRP1 0x0 PRP2 0x0
00:30:22.309 [2024-11-20 10:03:53.082534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:22.309 [2024-11-20 10:03:53.086099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:22.309 [2024-11-20 10:03:53.086150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:22.309 [2024-11-20 10:03:53.086949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.309 [2024-11-20 10:03:53.086968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:22.309 [2024-11-20 10:03:53.086976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:22.309 [2024-11-20 10:03:53.087198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:22.309 [2024-11-20 10:03:53.087416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:22.309 [2024-11-20 10:03:53.087429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:22.309 [2024-11-20 10:03:53.087438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:22.309 [2024-11-20 10:03:53.087446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
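The errno = 111 that posix_sock_create keeps reporting is ECONNREFUSED: the target's listener on 10.0.0.2:4420 is down while the test resets the controller, so every TCP connect is refused at the socket layer before any NVMe/TCP handshake can start. A minimal sketch of just that failing step (plain POSIX sockets, not SPDK's posix_sock_create itself; address and port taken from the log) behaves the same way against a dead listener:

    /* Sketch: connect() to an address with no listener fails with
     * errno 111 (ECONNREFUSED) on Linux, matching the posix.c:1054
     * records above. Any closed port reproduces it. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener this prints: connect() failed, errno = 111 */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }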
00:30:22.309-00:30:22.576 [2024-11-20 10:03:53.100179 .. 10:03:53.461309] [27 further reset attempts condensed; each repeats the same record sequence roughly 13-14 ms apart] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller -> posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 -> nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 -> nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set -> nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor -> nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state -> nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed -> nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. -> bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:22.576 [2024-11-20 10:03:53.473899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.576 [2024-11-20 10:03:53.474577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 10:03:53.474644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.576 [2024-11-20 10:03:53.474657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.576 [2024-11-20 10:03:53.474910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.576 [2024-11-20 10:03:53.475136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.576 [2024-11-20 10:03:53.475148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.576 [2024-11-20 10:03:53.475172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.576 [2024-11-20 10:03:53.475183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.839 [2024-11-20 10:03:53.487755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.839 [2024-11-20 10:03:53.488519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.839 [2024-11-20 10:03:53.488586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.839 [2024-11-20 10:03:53.488599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.839 [2024-11-20 10:03:53.488852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.839 [2024-11-20 10:03:53.489078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.839 [2024-11-20 10:03:53.489090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.839 [2024-11-20 10:03:53.489099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.839 [2024-11-20 10:03:53.489124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.839 [2024-11-20 10:03:53.501580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.839 [2024-11-20 10:03:53.502208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.839 [2024-11-20 10:03:53.502241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.839 [2024-11-20 10:03:53.502251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.839 [2024-11-20 10:03:53.502472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.839 [2024-11-20 10:03:53.502692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.839 [2024-11-20 10:03:53.502705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.839 [2024-11-20 10:03:53.502713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.839 [2024-11-20 10:03:53.502724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.839 [2024-11-20 10:03:53.515495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.839 [2024-11-20 10:03:53.516104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.839 [2024-11-20 10:03:53.516132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.839 [2024-11-20 10:03:53.516141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.839 [2024-11-20 10:03:53.516366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.839 [2024-11-20 10:03:53.516586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.839 [2024-11-20 10:03:53.516598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.839 [2024-11-20 10:03:53.516606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.839 [2024-11-20 10:03:53.516614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.839 [2024-11-20 10:03:53.529389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.839 [2024-11-20 10:03:53.529998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.839 [2024-11-20 10:03:53.530023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.839 [2024-11-20 10:03:53.530032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.839 [2024-11-20 10:03:53.530259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.839 [2024-11-20 10:03:53.530479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.839 [2024-11-20 10:03:53.530491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.839 [2024-11-20 10:03:53.530499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.839 [2024-11-20 10:03:53.530507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.839 [2024-11-20 10:03:53.543304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.839 [2024-11-20 10:03:53.543907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.839 [2024-11-20 10:03:53.543944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.839 [2024-11-20 10:03:53.543953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.839 [2024-11-20 10:03:53.544178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.839 [2024-11-20 10:03:53.544401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.839 [2024-11-20 10:03:53.544413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.839 [2024-11-20 10:03:53.544422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.839 [2024-11-20 10:03:53.544430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.839 [2024-11-20 10:03:53.557219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.839 [2024-11-20 10:03:53.557871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.839 [2024-11-20 10:03:53.557937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.839 [2024-11-20 10:03:53.557951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.839 [2024-11-20 10:03:53.558216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.839 [2024-11-20 10:03:53.558443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.839 [2024-11-20 10:03:53.558457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.839 [2024-11-20 10:03:53.558466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.839 [2024-11-20 10:03:53.558476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.839 9389.33 IOPS, 36.68 MiB/s [2024-11-20T09:03:53.755Z] [2024-11-20 10:03:53.571077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.839 [2024-11-20 10:03:53.571671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.839 [2024-11-20 10:03:53.571704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.839 [2024-11-20 10:03:53.571713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.839 [2024-11-20 10:03:53.571933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.839 [2024-11-20 10:03:53.572153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.839 [2024-11-20 10:03:53.572172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.839 [2024-11-20 10:03:53.572180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.839 [2024-11-20 10:03:53.572189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.839 [2024-11-20 10:03:53.585007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.839 [2024-11-20 10:03:53.585691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.839 [2024-11-20 10:03:53.585757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.839 [2024-11-20 10:03:53.585771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.839 [2024-11-20 10:03:53.586031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.839 [2024-11-20 10:03:53.586269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.839 [2024-11-20 10:03:53.586284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.839 [2024-11-20 10:03:53.586293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.840 [2024-11-20 10:03:53.586303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.840 [2024-11-20 10:03:53.598892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.840 [2024-11-20 10:03:53.599581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.840 [2024-11-20 10:03:53.599647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.840 [2024-11-20 10:03:53.599660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.840 [2024-11-20 10:03:53.599912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.840 [2024-11-20 10:03:53.600138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.840 [2024-11-20 10:03:53.600150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.840 [2024-11-20 10:03:53.600169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.840 [2024-11-20 10:03:53.600180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.840 [2024-11-20 10:03:53.613007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.840 [2024-11-20 10:03:53.613625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.840 [2024-11-20 10:03:53.613655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.840 [2024-11-20 10:03:53.613664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.840 [2024-11-20 10:03:53.613885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.840 [2024-11-20 10:03:53.614105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.840 [2024-11-20 10:03:53.614116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.840 [2024-11-20 10:03:53.614125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.840 [2024-11-20 10:03:53.614133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.840 [2024-11-20 10:03:53.626917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.840 [2024-11-20 10:03:53.627505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.840 [2024-11-20 10:03:53.627532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.840 [2024-11-20 10:03:53.627541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.840 [2024-11-20 10:03:53.627760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.840 [2024-11-20 10:03:53.627979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.840 [2024-11-20 10:03:53.628001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.840 [2024-11-20 10:03:53.628009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.840 [2024-11-20 10:03:53.628017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.840 [2024-11-20 10:03:53.640817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.840 [2024-11-20 10:03:53.641401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.840 [2024-11-20 10:03:53.641429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.840 [2024-11-20 10:03:53.641438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.840 [2024-11-20 10:03:53.641657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.840 [2024-11-20 10:03:53.641879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.840 [2024-11-20 10:03:53.641891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.840 [2024-11-20 10:03:53.641899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.840 [2024-11-20 10:03:53.641907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.840 [2024-11-20 10:03:53.654680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.840 [2024-11-20 10:03:53.655269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.840 [2024-11-20 10:03:53.655297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.840 [2024-11-20 10:03:53.655306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.840 [2024-11-20 10:03:53.655524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.840 [2024-11-20 10:03:53.655743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.840 [2024-11-20 10:03:53.655756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.840 [2024-11-20 10:03:53.655764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.840 [2024-11-20 10:03:53.655772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.840 [2024-11-20 10:03:53.668568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.840 [2024-11-20 10:03:53.669176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.840 [2024-11-20 10:03:53.669203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.840 [2024-11-20 10:03:53.669213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.840 [2024-11-20 10:03:53.669430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.840 [2024-11-20 10:03:53.669650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.840 [2024-11-20 10:03:53.669663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.840 [2024-11-20 10:03:53.669671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.840 [2024-11-20 10:03:53.669686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.840 [2024-11-20 10:03:53.682458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.840 [2024-11-20 10:03:53.683016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.840 [2024-11-20 10:03:53.683041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.840 [2024-11-20 10:03:53.683050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.840 [2024-11-20 10:03:53.683276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.840 [2024-11-20 10:03:53.683496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.840 [2024-11-20 10:03:53.683508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.840 [2024-11-20 10:03:53.683516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.840 [2024-11-20 10:03:53.683524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.840 [2024-11-20 10:03:53.696281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.840 [2024-11-20 10:03:53.696895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.840 [2024-11-20 10:03:53.696923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.840 [2024-11-20 10:03:53.696932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.840 [2024-11-20 10:03:53.697150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.840 [2024-11-20 10:03:53.697379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.840 [2024-11-20 10:03:53.697391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.840 [2024-11-20 10:03:53.697400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.840 [2024-11-20 10:03:53.697409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.840 [2024-11-20 10:03:53.710182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.840 [2024-11-20 10:03:53.710879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.840 [2024-11-20 10:03:53.710946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.840 [2024-11-20 10:03:53.710960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.840 [2024-11-20 10:03:53.711224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.840 [2024-11-20 10:03:53.711451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.840 [2024-11-20 10:03:53.711463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.840 [2024-11-20 10:03:53.711472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.840 [2024-11-20 10:03:53.711482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:22.840 [2024-11-20 10:03:53.724047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.840 [2024-11-20 10:03:53.724775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.840 [2024-11-20 10:03:53.724849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.840 [2024-11-20 10:03:53.724862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.840 [2024-11-20 10:03:53.725115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.840 [2024-11-20 10:03:53.725357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.840 [2024-11-20 10:03:53.725371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.841 [2024-11-20 10:03:53.725380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.841 [2024-11-20 10:03:53.725390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:22.841 [2024-11-20 10:03:53.737980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:22.841 [2024-11-20 10:03:53.738676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.841 [2024-11-20 10:03:53.738741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:22.841 [2024-11-20 10:03:53.738755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:22.841 [2024-11-20 10:03:53.739008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:22.841 [2024-11-20 10:03:53.739245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:22.841 [2024-11-20 10:03:53.739258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:22.841 [2024-11-20 10:03:53.739266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:22.841 [2024-11-20 10:03:53.739276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.104 [2024-11-20 10:03:53.751871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.104 [2024-11-20 10:03:53.752610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 10:03:53.752675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.104 [2024-11-20 10:03:53.752688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.104 [2024-11-20 10:03:53.752941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.104 [2024-11-20 10:03:53.753178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.104 [2024-11-20 10:03:53.753191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.104 [2024-11-20 10:03:53.753200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.104 [2024-11-20 10:03:53.753210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.104 [2024-11-20 10:03:53.765797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.104 [2024-11-20 10:03:53.766394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 10:03:53.766425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.104 [2024-11-20 10:03:53.766435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.104 [2024-11-20 10:03:53.766664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.104 [2024-11-20 10:03:53.766886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.104 [2024-11-20 10:03:53.766897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.104 [2024-11-20 10:03:53.766905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.104 [2024-11-20 10:03:53.766913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.104 [2024-11-20 10:03:53.779682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.104 [2024-11-20 10:03:53.780255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 10:03:53.780283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.104 [2024-11-20 10:03:53.780292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.104 [2024-11-20 10:03:53.780511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.104 [2024-11-20 10:03:53.780731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.104 [2024-11-20 10:03:53.780744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.104 [2024-11-20 10:03:53.780752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.104 [2024-11-20 10:03:53.780760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.104 [2024-11-20 10:03:53.793529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.104 [2024-11-20 10:03:53.794237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 10:03:53.794303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.104 [2024-11-20 10:03:53.794318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.104 [2024-11-20 10:03:53.794573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.104 [2024-11-20 10:03:53.794799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.104 [2024-11-20 10:03:53.794813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.104 [2024-11-20 10:03:53.794823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.104 [2024-11-20 10:03:53.794832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.104 [2024-11-20 10:03:53.807422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.104 [2024-11-20 10:03:53.808147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 10:03:53.808225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.104 [2024-11-20 10:03:53.808239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.104 [2024-11-20 10:03:53.808492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.104 [2024-11-20 10:03:53.808717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.104 [2024-11-20 10:03:53.808737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.104 [2024-11-20 10:03:53.808746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.104 [2024-11-20 10:03:53.808756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.104 [2024-11-20 10:03:53.821344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.104 [2024-11-20 10:03:53.822050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 10:03:53.822117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.104 [2024-11-20 10:03:53.822130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.104 [2024-11-20 10:03:53.822394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.104 [2024-11-20 10:03:53.822620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.104 [2024-11-20 10:03:53.822632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.104 [2024-11-20 10:03:53.822640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.104 [2024-11-20 10:03:53.822650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.104 [2024-11-20 10:03:53.835231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.104 [2024-11-20 10:03:53.835853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 10:03:53.835883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.104 [2024-11-20 10:03:53.835893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.104 [2024-11-20 10:03:53.836113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.104 [2024-11-20 10:03:53.836343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.104 [2024-11-20 10:03:53.836356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.104 [2024-11-20 10:03:53.836364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.104 [2024-11-20 10:03:53.836373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.104 [2024-11-20 10:03:53.849144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.104 [2024-11-20 10:03:53.849690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-20 10:03:53.849716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.104 [2024-11-20 10:03:53.849725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.104 [2024-11-20 10:03:53.849944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.104 [2024-11-20 10:03:53.850172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.104 [2024-11-20 10:03:53.850186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.105 [2024-11-20 10:03:53.850194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.105 [2024-11-20 10:03:53.850212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.105 [2024-11-20 10:03:53.862983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.105 [2024-11-20 10:03:53.863588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 10:03:53.863615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.105 [2024-11-20 10:03:53.863624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.105 [2024-11-20 10:03:53.863842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.105 [2024-11-20 10:03:53.864061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.105 [2024-11-20 10:03:53.864076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.105 [2024-11-20 10:03:53.864083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.105 [2024-11-20 10:03:53.864091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.105 [2024-11-20 10:03:53.876877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.105 [2024-11-20 10:03:53.877436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 10:03:53.877481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.105 [2024-11-20 10:03:53.877492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.105 [2024-11-20 10:03:53.877728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.105 [2024-11-20 10:03:53.877951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.105 [2024-11-20 10:03:53.877962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.105 [2024-11-20 10:03:53.877970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.105 [2024-11-20 10:03:53.877979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.105 [2024-11-20 10:03:53.890784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.105 [2024-11-20 10:03:53.891369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 10:03:53.891436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.105 [2024-11-20 10:03:53.891449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.105 [2024-11-20 10:03:53.891702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.105 [2024-11-20 10:03:53.891928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.105 [2024-11-20 10:03:53.891940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.105 [2024-11-20 10:03:53.891949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.105 [2024-11-20 10:03:53.891959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.105 [2024-11-20 10:03:53.904539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.105 [2024-11-20 10:03:53.905209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 10:03:53.905294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.105 [2024-11-20 10:03:53.905309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.105 [2024-11-20 10:03:53.905563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.105 [2024-11-20 10:03:53.905788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.105 [2024-11-20 10:03:53.905801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.105 [2024-11-20 10:03:53.905809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.105 [2024-11-20 10:03:53.905819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.105 [2024-11-20 10:03:53.918404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.105 [2024-11-20 10:03:53.919011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 10:03:53.919077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.105 [2024-11-20 10:03:53.919090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.105 [2024-11-20 10:03:53.919356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.105 [2024-11-20 10:03:53.919583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.105 [2024-11-20 10:03:53.919594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.105 [2024-11-20 10:03:53.919604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.105 [2024-11-20 10:03:53.919614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.105 [2024-11-20 10:03:53.932178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.105 [2024-11-20 10:03:53.932798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 10:03:53.932828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.105 [2024-11-20 10:03:53.932837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.105 [2024-11-20 10:03:53.933057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.105 [2024-11-20 10:03:53.933286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.105 [2024-11-20 10:03:53.933300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.105 [2024-11-20 10:03:53.933308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.105 [2024-11-20 10:03:53.933316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.105 [2024-11-20 10:03:53.946121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.105 [2024-11-20 10:03:53.946705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 10:03:53.946772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.105 [2024-11-20 10:03:53.946785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.105 [2024-11-20 10:03:53.947045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.105 [2024-11-20 10:03:53.947282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.105 [2024-11-20 10:03:53.947295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.105 [2024-11-20 10:03:53.947304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.105 [2024-11-20 10:03:53.947314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.105 [2024-11-20 10:03:53.959885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.105 [2024-11-20 10:03:53.960554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 10:03:53.960621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.105 [2024-11-20 10:03:53.960634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.105 [2024-11-20 10:03:53.960887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.105 [2024-11-20 10:03:53.961113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.105 [2024-11-20 10:03:53.961125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.105 [2024-11-20 10:03:53.961134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.105 [2024-11-20 10:03:53.961144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.105 [2024-11-20 10:03:53.973494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.105 [2024-11-20 10:03:53.974044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 10:03:53.974071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.105 [2024-11-20 10:03:53.974078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.105 [2024-11-20 10:03:53.974242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.105 [2024-11-20 10:03:53.974395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.105 [2024-11-20 10:03:53.974405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.105 [2024-11-20 10:03:53.974411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.105 [2024-11-20 10:03:53.974417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.105 [2024-11-20 10:03:53.986125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.105 [2024-11-20 10:03:53.986702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-20 10:03:53.986762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.105 [2024-11-20 10:03:53.986773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.105 [2024-11-20 10:03:53.986956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.105 [2024-11-20 10:03:53.987113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.106 [2024-11-20 10:03:53.987130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.106 [2024-11-20 10:03:53.987138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.106 [2024-11-20 10:03:53.987146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.106 [2024-11-20 10:03:53.998739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.106 [2024-11-20 10:03:53.999417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.106 [2024-11-20 10:03:53.999468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.106 [2024-11-20 10:03:53.999477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.106 [2024-11-20 10:03:53.999654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.106 [2024-11-20 10:03:53.999811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.106 [2024-11-20 10:03:53.999821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.106 [2024-11-20 10:03:53.999827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.106 [2024-11-20 10:03:53.999834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.106 [2024-11-20 10:03:54.011419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.106 [2024-11-20 10:03:54.012019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.106 [2024-11-20 10:03:54.012067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.106 [2024-11-20 10:03:54.012076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.106 [2024-11-20 10:03:54.012261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.106 [2024-11-20 10:03:54.012417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.106 [2024-11-20 10:03:54.012425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.106 [2024-11-20 10:03:54.012431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.106 [2024-11-20 10:03:54.012438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.369 [2024-11-20 10:03:54.024013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.370 [2024-11-20 10:03:54.024544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.370 [2024-11-20 10:03:54.024565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.370 [2024-11-20 10:03:54.024572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.370 [2024-11-20 10:03:54.024723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.370 [2024-11-20 10:03:54.024874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.370 [2024-11-20 10:03:54.024883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.370 [2024-11-20 10:03:54.024889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.370 [2024-11-20 10:03:54.024900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.370 [2024-11-20 10:03:54.036616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.370 [2024-11-20 10:03:54.037280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.370 [2024-11-20 10:03:54.037323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.370 [2024-11-20 10:03:54.037332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.370 [2024-11-20 10:03:54.037503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.370 [2024-11-20 10:03:54.037658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.370 [2024-11-20 10:03:54.037667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.370 [2024-11-20 10:03:54.037673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.370 [2024-11-20 10:03:54.037682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.370 [2024-11-20 10:03:54.049253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.370 [2024-11-20 10:03:54.049827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.370 [2024-11-20 10:03:54.049868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.370 [2024-11-20 10:03:54.049876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.370 [2024-11-20 10:03:54.050046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.370 [2024-11-20 10:03:54.050210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.370 [2024-11-20 10:03:54.050219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.370 [2024-11-20 10:03:54.050226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.370 [2024-11-20 10:03:54.050234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.370 [2024-11-20 10:03:54.061938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.370 [2024-11-20 10:03:54.062555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.370 [2024-11-20 10:03:54.062594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.370 [2024-11-20 10:03:54.062602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.370 [2024-11-20 10:03:54.062771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.370 [2024-11-20 10:03:54.062925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.370 [2024-11-20 10:03:54.062933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.370 [2024-11-20 10:03:54.062940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.370 [2024-11-20 10:03:54.062946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.370 [2024-11-20 10:03:54.074525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.370 [2024-11-20 10:03:54.075101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.370 [2024-11-20 10:03:54.075143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.370 [2024-11-20 10:03:54.075152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.370 [2024-11-20 10:03:54.075328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.370 [2024-11-20 10:03:54.075482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.370 [2024-11-20 10:03:54.075489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.370 [2024-11-20 10:03:54.075495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.370 [2024-11-20 10:03:54.075501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.370 [2024-11-20 10:03:54.087202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.370 [2024-11-20 10:03:54.087677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.370 [2024-11-20 10:03:54.087695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.370 [2024-11-20 10:03:54.087702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.370 [2024-11-20 10:03:54.087851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.370 [2024-11-20 10:03:54.088000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.370 [2024-11-20 10:03:54.088007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.370 [2024-11-20 10:03:54.088013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.370 [2024-11-20 10:03:54.088019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.370 [2024-11-20 10:03:54.099859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.370 [2024-11-20 10:03:54.100298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.370 [2024-11-20 10:03:54.100314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.370 [2024-11-20 10:03:54.100320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.370 [2024-11-20 10:03:54.100470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.370 [2024-11-20 10:03:54.100620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.370 [2024-11-20 10:03:54.100626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.370 [2024-11-20 10:03:54.100632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.370 [2024-11-20 10:03:54.100637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.370 [2024-11-20 10:03:54.112465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.370 [2024-11-20 10:03:54.113034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.370 [2024-11-20 10:03:54.113068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.370 [2024-11-20 10:03:54.113077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.370 [2024-11-20 10:03:54.113254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.370 [2024-11-20 10:03:54.113408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.370 [2024-11-20 10:03:54.113415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.370 [2024-11-20 10:03:54.113421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.370 [2024-11-20 10:03:54.113427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.370 [2024-11-20 10:03:54.125062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.370 [2024-11-20 10:03:54.125648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.370 [2024-11-20 10:03:54.125682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.370 [2024-11-20 10:03:54.125691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.370 [2024-11-20 10:03:54.125857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.370 [2024-11-20 10:03:54.126010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.370 [2024-11-20 10:03:54.126018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.370 [2024-11-20 10:03:54.126024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.370 [2024-11-20 10:03:54.126031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.370 [2024-11-20 10:03:54.137736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.370 [2024-11-20 10:03:54.138434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.370 [2024-11-20 10:03:54.138467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.370 [2024-11-20 10:03:54.138476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.370 [2024-11-20 10:03:54.138642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.370 [2024-11-20 10:03:54.138795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.370 [2024-11-20 10:03:54.138802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.370 [2024-11-20 10:03:54.138808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.371 [2024-11-20 10:03:54.138814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.371 [2024-11-20 10:03:54.150365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.371 [2024-11-20 10:03:54.150969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.371 [2024-11-20 10:03:54.151001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.371 [2024-11-20 10:03:54.151009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.371 [2024-11-20 10:03:54.151179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.371 [2024-11-20 10:03:54.151332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.371 [2024-11-20 10:03:54.151346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.371 [2024-11-20 10:03:54.151351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.371 [2024-11-20 10:03:54.151357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.371 [2024-11-20 10:03:54.163048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.371 [2024-11-20 10:03:54.163619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.371 [2024-11-20 10:03:54.163651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.371 [2024-11-20 10:03:54.163660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.371 [2024-11-20 10:03:54.163824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.371 [2024-11-20 10:03:54.163976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.371 [2024-11-20 10:03:54.163983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.371 [2024-11-20 10:03:54.163990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.371 [2024-11-20 10:03:54.163996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.371 [2024-11-20 10:03:54.175701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.371 [2024-11-20 10:03:54.176193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.371 [2024-11-20 10:03:54.176209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.371 [2024-11-20 10:03:54.176215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.371 [2024-11-20 10:03:54.176364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.371 [2024-11-20 10:03:54.176513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.371 [2024-11-20 10:03:54.176520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.371 [2024-11-20 10:03:54.176525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.371 [2024-11-20 10:03:54.176530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.371 [2024-11-20 10:03:54.188359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.371 [2024-11-20 10:03:54.188671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.371 [2024-11-20 10:03:54.188686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.371 [2024-11-20 10:03:54.188692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.371 [2024-11-20 10:03:54.188840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.371 [2024-11-20 10:03:54.188989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.371 [2024-11-20 10:03:54.188996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.371 [2024-11-20 10:03:54.189001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.371 [2024-11-20 10:03:54.189010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.371 [2024-11-20 10:03:54.200982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.371 [2024-11-20 10:03:54.201477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.371 [2024-11-20 10:03:54.201510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.371 [2024-11-20 10:03:54.201519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.371 [2024-11-20 10:03:54.201683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.371 [2024-11-20 10:03:54.201835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.371 [2024-11-20 10:03:54.201843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.371 [2024-11-20 10:03:54.201849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.371 [2024-11-20 10:03:54.201855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.371 [2024-11-20 10:03:54.213544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.371 [2024-11-20 10:03:54.214043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.371 [2024-11-20 10:03:54.214058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.371 [2024-11-20 10:03:54.214064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.371 [2024-11-20 10:03:54.214218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.371 [2024-11-20 10:03:54.214369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.371 [2024-11-20 10:03:54.214376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.371 [2024-11-20 10:03:54.214382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.371 [2024-11-20 10:03:54.214387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.371 [2024-11-20 10:03:54.226209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.371 [2024-11-20 10:03:54.226587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.371 [2024-11-20 10:03:54.226601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.371 [2024-11-20 10:03:54.226606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.371 [2024-11-20 10:03:54.226755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.371 [2024-11-20 10:03:54.226904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.371 [2024-11-20 10:03:54.226910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.371 [2024-11-20 10:03:54.226916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.371 [2024-11-20 10:03:54.226921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.371 [2024-11-20 10:03:54.238894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.371 [2024-11-20 10:03:54.239386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.371 [2024-11-20 10:03:54.239404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.371 [2024-11-20 10:03:54.239409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.371 [2024-11-20 10:03:54.239558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.371 [2024-11-20 10:03:54.239707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.371 [2024-11-20 10:03:54.239714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.371 [2024-11-20 10:03:54.239719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.371 [2024-11-20 10:03:54.239724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.371 [2024-11-20 10:03:54.251547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.371 [2024-11-20 10:03:54.252030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.371 [2024-11-20 10:03:54.252043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.371 [2024-11-20 10:03:54.252049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.371 [2024-11-20 10:03:54.252202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.371 [2024-11-20 10:03:54.252352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.371 [2024-11-20 10:03:54.252358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.371 [2024-11-20 10:03:54.252363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.371 [2024-11-20 10:03:54.252368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.371 [2024-11-20 10:03:54.264197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.371 [2024-11-20 10:03:54.264779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.371 [2024-11-20 10:03:54.264811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.371 [2024-11-20 10:03:54.264820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.371 [2024-11-20 10:03:54.264985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.371 [2024-11-20 10:03:54.265137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.372 [2024-11-20 10:03:54.265145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.372 [2024-11-20 10:03:54.265151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.372 [2024-11-20 10:03:54.265157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.372 [2024-11-20 10:03:54.276862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.372 [2024-11-20 10:03:54.277222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.372 [2024-11-20 10:03:54.277238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.372 [2024-11-20 10:03:54.277244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.372 [2024-11-20 10:03:54.277398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.372 [2024-11-20 10:03:54.277546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.372 [2024-11-20 10:03:54.277553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.372 [2024-11-20 10:03:54.277559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.372 [2024-11-20 10:03:54.277565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.634 [2024-11-20 10:03:54.289539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.634 [2024-11-20 10:03:54.290029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.634 [2024-11-20 10:03:54.290042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.634 [2024-11-20 10:03:54.290048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.634 [2024-11-20 10:03:54.290200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.634 [2024-11-20 10:03:54.290350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.634 [2024-11-20 10:03:54.290357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.634 [2024-11-20 10:03:54.290363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.634 [2024-11-20 10:03:54.290368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.634 [2024-11-20 10:03:54.302187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.634 [2024-11-20 10:03:54.302749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.634 [2024-11-20 10:03:54.302781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.634 [2024-11-20 10:03:54.302790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.634 [2024-11-20 10:03:54.302954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.634 [2024-11-20 10:03:54.303106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.634 [2024-11-20 10:03:54.303113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.634 [2024-11-20 10:03:54.303119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.634 [2024-11-20 10:03:54.303125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.634 [2024-11-20 10:03:54.314816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.634 [2024-11-20 10:03:54.315397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.634 [2024-11-20 10:03:54.315429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.634 [2024-11-20 10:03:54.315437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.634 [2024-11-20 10:03:54.315602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.634 [2024-11-20 10:03:54.315754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.634 [2024-11-20 10:03:54.315765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.634 [2024-11-20 10:03:54.315771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.634 [2024-11-20 10:03:54.315777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.634 [2024-11-20 10:03:54.327474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.634 [2024-11-20 10:03:54.327963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.635 [2024-11-20 10:03:54.327994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.635 [2024-11-20 10:03:54.328002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.635 [2024-11-20 10:03:54.328173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.635 [2024-11-20 10:03:54.328326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.635 [2024-11-20 10:03:54.328333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.635 [2024-11-20 10:03:54.328339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.635 [2024-11-20 10:03:54.328346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.635 [2024-11-20 10:03:54.340188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.635 [2024-11-20 10:03:54.340723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.635 [2024-11-20 10:03:54.340755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.635 [2024-11-20 10:03:54.340764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.635 [2024-11-20 10:03:54.340928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.635 [2024-11-20 10:03:54.341080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.635 [2024-11-20 10:03:54.341087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.635 [2024-11-20 10:03:54.341094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.635 [2024-11-20 10:03:54.341101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.635 [2024-11-20 10:03:54.352795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.635 [2024-11-20 10:03:54.353306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.635 [2024-11-20 10:03:54.353338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.635 [2024-11-20 10:03:54.353347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.635 [2024-11-20 10:03:54.353514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.635 [2024-11-20 10:03:54.353666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.635 [2024-11-20 10:03:54.353673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.635 [2024-11-20 10:03:54.353679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.635 [2024-11-20 10:03:54.353689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.635 [2024-11-20 10:03:54.365388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.635 [2024-11-20 10:03:54.365942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.635 [2024-11-20 10:03:54.365974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.635 [2024-11-20 10:03:54.365983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.635 [2024-11-20 10:03:54.366147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.635 [2024-11-20 10:03:54.366307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.635 [2024-11-20 10:03:54.366315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.635 [2024-11-20 10:03:54.366321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.635 [2024-11-20 10:03:54.366328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.635 [2024-11-20 10:03:54.378021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.635 [2024-11-20 10:03:54.378619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.635 [2024-11-20 10:03:54.378651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.635 [2024-11-20 10:03:54.378660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.635 [2024-11-20 10:03:54.378824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.635 [2024-11-20 10:03:54.378977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.635 [2024-11-20 10:03:54.378985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.635 [2024-11-20 10:03:54.378991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.635 [2024-11-20 10:03:54.378997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.635 [2024-11-20 10:03:54.390686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.635 [2024-11-20 10:03:54.391145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.635 [2024-11-20 10:03:54.391165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.635 [2024-11-20 10:03:54.391171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.635 [2024-11-20 10:03:54.391320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.635 [2024-11-20 10:03:54.391470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.635 [2024-11-20 10:03:54.391477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.635 [2024-11-20 10:03:54.391482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.635 [2024-11-20 10:03:54.391487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.635 [2024-11-20 10:03:54.403308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.635 [2024-11-20 10:03:54.403868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.635 [2024-11-20 10:03:54.403903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.635 [2024-11-20 10:03:54.403911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.635 [2024-11-20 10:03:54.404076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.635 [2024-11-20 10:03:54.404235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.635 [2024-11-20 10:03:54.404243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.635 [2024-11-20 10:03:54.404249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.635 [2024-11-20 10:03:54.404255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.635 [2024-11-20 10:03:54.415934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.635 [2024-11-20 10:03:54.416524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.635 [2024-11-20 10:03:54.416556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.635 [2024-11-20 10:03:54.416565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.635 [2024-11-20 10:03:54.416729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.635 [2024-11-20 10:03:54.416881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.635 [2024-11-20 10:03:54.416889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.635 [2024-11-20 10:03:54.416896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.635 [2024-11-20 10:03:54.416902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.635 [2024-11-20 10:03:54.428590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.635 [2024-11-20 10:03:54.429202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.635 [2024-11-20 10:03:54.429234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.635 [2024-11-20 10:03:54.429243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.635 [2024-11-20 10:03:54.429408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.635 [2024-11-20 10:03:54.429560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.635 [2024-11-20 10:03:54.429567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.635 [2024-11-20 10:03:54.429573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.635 [2024-11-20 10:03:54.429579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.635 [2024-11-20 10:03:54.441279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.635 [2024-11-20 10:03:54.441856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.635 [2024-11-20 10:03:54.441888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.635 [2024-11-20 10:03:54.441897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.635 [2024-11-20 10:03:54.442064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.635 [2024-11-20 10:03:54.442223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.635 [2024-11-20 10:03:54.442231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.635 [2024-11-20 10:03:54.442237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.635 [2024-11-20 10:03:54.442243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.636 [2024-11-20 10:03:54.453927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.636 [2024-11-20 10:03:54.454489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.636 [2024-11-20 10:03:54.454521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.636 [2024-11-20 10:03:54.454530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.636 [2024-11-20 10:03:54.454694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.636 [2024-11-20 10:03:54.454846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.636 [2024-11-20 10:03:54.454853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.636 [2024-11-20 10:03:54.454859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.636 [2024-11-20 10:03:54.454866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.636 [2024-11-20 10:03:54.466556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.636 [2024-11-20 10:03:54.467145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.636 [2024-11-20 10:03:54.467182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.636 [2024-11-20 10:03:54.467190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.636 [2024-11-20 10:03:54.467354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.636 [2024-11-20 10:03:54.467506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.636 [2024-11-20 10:03:54.467513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.636 [2024-11-20 10:03:54.467520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.636 [2024-11-20 10:03:54.467526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.636 [2024-11-20 10:03:54.479215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.636 [2024-11-20 10:03:54.479655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.636 [2024-11-20 10:03:54.479686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.636 [2024-11-20 10:03:54.479696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.636 [2024-11-20 10:03:54.479862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.636 [2024-11-20 10:03:54.480014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.636 [2024-11-20 10:03:54.480025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.636 [2024-11-20 10:03:54.480031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.636 [2024-11-20 10:03:54.480037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.636 [2024-11-20 10:03:54.491869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.636 [2024-11-20 10:03:54.492470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.636 [2024-11-20 10:03:54.492502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.636 [2024-11-20 10:03:54.492511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.636 [2024-11-20 10:03:54.492676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.636 [2024-11-20 10:03:54.492827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.636 [2024-11-20 10:03:54.492834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.636 [2024-11-20 10:03:54.492840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.636 [2024-11-20 10:03:54.492847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.636 [2024-11-20 10:03:54.504533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.636 [2024-11-20 10:03:54.505106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.636 [2024-11-20 10:03:54.505138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.636 [2024-11-20 10:03:54.505147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.636 [2024-11-20 10:03:54.505318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.636 [2024-11-20 10:03:54.505471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.636 [2024-11-20 10:03:54.505478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.636 [2024-11-20 10:03:54.505483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.636 [2024-11-20 10:03:54.505490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.636 [2024-11-20 10:03:54.517186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:23.636 [2024-11-20 10:03:54.517751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.636 [2024-11-20 10:03:54.517782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:23.636 [2024-11-20 10:03:54.517791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:23.636 [2024-11-20 10:03:54.517955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:23.636 [2024-11-20 10:03:54.518107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:23.636 [2024-11-20 10:03:54.518115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:23.636 [2024-11-20 10:03:54.518122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:23.636 [2024-11-20 10:03:54.518131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:23.636 [2024-11-20 10:03:54.529819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.636 [2024-11-20 10:03:54.530441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.636 [2024-11-20 10:03:54.530473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.636 [2024-11-20 10:03:54.530482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.636 [2024-11-20 10:03:54.530646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.636 [2024-11-20 10:03:54.530798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.636 [2024-11-20 10:03:54.530806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.636 [2024-11-20 10:03:54.530811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.636 [2024-11-20 10:03:54.530818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.636 [2024-11-20 10:03:54.542519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.636 [2024-11-20 10:03:54.543052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.636 [2024-11-20 10:03:54.543084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.636 [2024-11-20 10:03:54.543093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.636 [2024-11-20 10:03:54.543264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.636 [2024-11-20 10:03:54.543417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.636 [2024-11-20 10:03:54.543424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.636 [2024-11-20 10:03:54.543430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.636 [2024-11-20 10:03:54.543436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.899 [2024-11-20 10:03:54.555123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.899 [2024-11-20 10:03:54.555721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.899 [2024-11-20 10:03:54.555753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.899 [2024-11-20 10:03:54.555761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.899 [2024-11-20 10:03:54.555926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.899 [2024-11-20 10:03:54.556079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.899 [2024-11-20 10:03:54.556085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.899 [2024-11-20 10:03:54.556092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.899 [2024-11-20 10:03:54.556098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.899 7042.00 IOPS, 27.51 MiB/s [2024-11-20T09:03:54.815Z] [2024-11-20 10:03:54.567806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.899 [2024-11-20 10:03:54.568468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.899 [2024-11-20 10:03:54.568500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.899 [2024-11-20 10:03:54.568508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.899 [2024-11-20 10:03:54.568673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.899 [2024-11-20 10:03:54.568833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.899 [2024-11-20 10:03:54.568841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.899 [2024-11-20 10:03:54.568848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.899 [2024-11-20 10:03:54.568854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
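Aside on the interleaved bdevperf sample above ("7042.00 IOPS, 27.51 MiB/s"): the two figures are consistent with a 4 KiB I/O size, since 7042 IOPS × 4096 B = 28,844,032 B/s ≈ 27.51 MiB/s, so the workload was still completing I/O while this controller retried.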
00:30:23.899 [2024-11-20 10:03:54.580403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.899 [2024-11-20 10:03:54.581015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.899 [2024-11-20 10:03:54.581046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.899 [2024-11-20 10:03:54.581055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.899 [2024-11-20 10:03:54.581227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.899 [2024-11-20 10:03:54.581380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.899 [2024-11-20 10:03:54.581388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.899 [2024-11-20 10:03:54.581394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.899 [2024-11-20 10:03:54.581400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.899 [2024-11-20 10:03:54.593084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.899 [2024-11-20 10:03:54.593618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.899 [2024-11-20 10:03:54.593649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.899 [2024-11-20 10:03:54.593658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.899 [2024-11-20 10:03:54.593822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.899 [2024-11-20 10:03:54.593975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.899 [2024-11-20 10:03:54.593981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.899 [2024-11-20 10:03:54.593987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.899 [2024-11-20 10:03:54.593993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.899 [2024-11-20 10:03:54.605690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.899 [2024-11-20 10:03:54.606144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.899 [2024-11-20 10:03:54.606181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.899 [2024-11-20 10:03:54.606191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.899 [2024-11-20 10:03:54.606361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.899 [2024-11-20 10:03:54.606513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.899 [2024-11-20 10:03:54.606520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.899 [2024-11-20 10:03:54.606526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.899 [2024-11-20 10:03:54.606532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.899 [2024-11-20 10:03:54.618379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.899 [2024-11-20 10:03:54.618833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.899 [2024-11-20 10:03:54.618849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.899 [2024-11-20 10:03:54.618856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.899 [2024-11-20 10:03:54.619004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.899 [2024-11-20 10:03:54.619154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.899 [2024-11-20 10:03:54.619167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.899 [2024-11-20 10:03:54.619173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.899 [2024-11-20 10:03:54.619179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.899 [2024-11-20 10:03:54.630996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.899 [2024-11-20 10:03:54.631586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.899 [2024-11-20 10:03:54.631618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.899 [2024-11-20 10:03:54.631626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.900 [2024-11-20 10:03:54.631791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.900 [2024-11-20 10:03:54.631943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.900 [2024-11-20 10:03:54.631950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.900 [2024-11-20 10:03:54.631956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.900 [2024-11-20 10:03:54.631963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.900 [2024-11-20 10:03:54.643662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.900 [2024-11-20 10:03:54.644282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.900 [2024-11-20 10:03:54.644314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.900 [2024-11-20 10:03:54.644323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.900 [2024-11-20 10:03:54.644487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.900 [2024-11-20 10:03:54.644639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.900 [2024-11-20 10:03:54.644650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.900 [2024-11-20 10:03:54.644657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.900 [2024-11-20 10:03:54.644663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.900 [2024-11-20 10:03:54.656246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.900 [2024-11-20 10:03:54.656776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.900 [2024-11-20 10:03:54.656808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.900 [2024-11-20 10:03:54.656817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.900 [2024-11-20 10:03:54.656981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.900 [2024-11-20 10:03:54.657134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.900 [2024-11-20 10:03:54.657141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.900 [2024-11-20 10:03:54.657148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.900 [2024-11-20 10:03:54.657154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.900 [2024-11-20 10:03:54.668843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.900 [2024-11-20 10:03:54.669451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.900 [2024-11-20 10:03:54.669483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.900 [2024-11-20 10:03:54.669492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.900 [2024-11-20 10:03:54.669664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.900 [2024-11-20 10:03:54.669817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.900 [2024-11-20 10:03:54.669825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.900 [2024-11-20 10:03:54.669831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.900 [2024-11-20 10:03:54.669836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.900 [2024-11-20 10:03:54.681525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.900 [2024-11-20 10:03:54.682115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.900 [2024-11-20 10:03:54.682147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.900 [2024-11-20 10:03:54.682156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.900 [2024-11-20 10:03:54.682327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.900 [2024-11-20 10:03:54.682479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.900 [2024-11-20 10:03:54.682487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.900 [2024-11-20 10:03:54.682492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.900 [2024-11-20 10:03:54.682502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.900 [2024-11-20 10:03:54.694187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.900 [2024-11-20 10:03:54.694785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.900 [2024-11-20 10:03:54.694816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.900 [2024-11-20 10:03:54.694825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.900 [2024-11-20 10:03:54.694989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.900 [2024-11-20 10:03:54.695142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.900 [2024-11-20 10:03:54.695149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.900 [2024-11-20 10:03:54.695155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.900 [2024-11-20 10:03:54.695170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.900 [2024-11-20 10:03:54.706855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.900 [2024-11-20 10:03:54.707428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.900 [2024-11-20 10:03:54.707459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.900 [2024-11-20 10:03:54.707468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.900 [2024-11-20 10:03:54.707633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.900 [2024-11-20 10:03:54.707785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.900 [2024-11-20 10:03:54.707792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.900 [2024-11-20 10:03:54.707799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.900 [2024-11-20 10:03:54.707805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.900 [2024-11-20 10:03:54.719493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.900 [2024-11-20 10:03:54.720041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.900 [2024-11-20 10:03:54.720072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.900 [2024-11-20 10:03:54.720081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.900 [2024-11-20 10:03:54.720252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.900 [2024-11-20 10:03:54.720405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.900 [2024-11-20 10:03:54.720413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.900 [2024-11-20 10:03:54.720418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.900 [2024-11-20 10:03:54.720424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.900 [2024-11-20 10:03:54.732110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.900 [2024-11-20 10:03:54.732709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.900 [2024-11-20 10:03:54.732741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.900 [2024-11-20 10:03:54.732750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.900 [2024-11-20 10:03:54.732915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.900 [2024-11-20 10:03:54.733067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.900 [2024-11-20 10:03:54.733074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.900 [2024-11-20 10:03:54.733081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.900 [2024-11-20 10:03:54.733087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.900 [2024-11-20 10:03:54.744788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.900 [2024-11-20 10:03:54.745450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.900 [2024-11-20 10:03:54.745482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.900 [2024-11-20 10:03:54.745491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.900 [2024-11-20 10:03:54.745655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.901 [2024-11-20 10:03:54.745808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.901 [2024-11-20 10:03:54.745815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.901 [2024-11-20 10:03:54.745822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.901 [2024-11-20 10:03:54.745828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.901 [2024-11-20 10:03:54.757376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.901 [2024-11-20 10:03:54.757953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.901 [2024-11-20 10:03:54.757985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.901 [2024-11-20 10:03:54.757994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.901 [2024-11-20 10:03:54.758165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.901 [2024-11-20 10:03:54.758318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.901 [2024-11-20 10:03:54.758325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.901 [2024-11-20 10:03:54.758331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.901 [2024-11-20 10:03:54.758337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.901 [2024-11-20 10:03:54.770022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.901 [2024-11-20 10:03:54.770581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.901 [2024-11-20 10:03:54.770613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.901 [2024-11-20 10:03:54.770622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.901 [2024-11-20 10:03:54.770793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.901 [2024-11-20 10:03:54.770945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.901 [2024-11-20 10:03:54.770952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.901 [2024-11-20 10:03:54.770959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.901 [2024-11-20 10:03:54.770965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.901 [2024-11-20 10:03:54.782654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.901 [2024-11-20 10:03:54.783201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.901 [2024-11-20 10:03:54.783233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.901 [2024-11-20 10:03:54.783242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.901 [2024-11-20 10:03:54.783409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.901 [2024-11-20 10:03:54.783562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.901 [2024-11-20 10:03:54.783569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.901 [2024-11-20 10:03:54.783575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.901 [2024-11-20 10:03:54.783581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:23.901 [2024-11-20 10:03:54.795277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.901 [2024-11-20 10:03:54.795736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.901 [2024-11-20 10:03:54.795767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.901 [2024-11-20 10:03:54.795777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.901 [2024-11-20 10:03:54.795943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.901 [2024-11-20 10:03:54.796096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.901 [2024-11-20 10:03:54.796103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.901 [2024-11-20 10:03:54.796108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.901 [2024-11-20 10:03:54.796115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:23.901 [2024-11-20 10:03:54.807944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:23.901 [2024-11-20 10:03:54.808407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.901 [2024-11-20 10:03:54.808438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:23.901 [2024-11-20 10:03:54.808447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:23.901 [2024-11-20 10:03:54.808611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:23.901 [2024-11-20 10:03:54.808763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:23.901 [2024-11-20 10:03:54.808775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:23.901 [2024-11-20 10:03:54.808780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:23.901 [2024-11-20 10:03:54.808786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.163 [2024-11-20 10:03:54.820618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.163 [2024-11-20 10:03:54.821191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.163 [2024-11-20 10:03:54.821223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.163 [2024-11-20 10:03:54.821232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.163 [2024-11-20 10:03:54.821397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.163 [2024-11-20 10:03:54.821549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.163 [2024-11-20 10:03:54.821556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.163 [2024-11-20 10:03:54.821562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.163 [2024-11-20 10:03:54.821568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.163 [2024-11-20 10:03:54.833256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.163 [2024-11-20 10:03:54.833847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.163 [2024-11-20 10:03:54.833878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.163 [2024-11-20 10:03:54.833887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.163 [2024-11-20 10:03:54.834051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.164 [2024-11-20 10:03:54.834211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.164 [2024-11-20 10:03:54.834219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.164 [2024-11-20 10:03:54.834225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.164 [2024-11-20 10:03:54.834232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.164 [2024-11-20 10:03:54.845824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.164 [2024-11-20 10:03:54.846285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.164 [2024-11-20 10:03:54.846317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.164 [2024-11-20 10:03:54.846326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.164 [2024-11-20 10:03:54.846493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.164 [2024-11-20 10:03:54.846645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.164 [2024-11-20 10:03:54.846653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.164 [2024-11-20 10:03:54.846659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.164 [2024-11-20 10:03:54.846669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.164 [2024-11-20 10:03:54.858501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.164 [2024-11-20 10:03:54.859080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.164 [2024-11-20 10:03:54.859112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.164 [2024-11-20 10:03:54.859121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.164 [2024-11-20 10:03:54.859293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.164 [2024-11-20 10:03:54.859446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.164 [2024-11-20 10:03:54.859453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.164 [2024-11-20 10:03:54.859459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.164 [2024-11-20 10:03:54.859465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.164 [2024-11-20 10:03:54.871154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.164 [2024-11-20 10:03:54.871656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.164 [2024-11-20 10:03:54.871671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.164 [2024-11-20 10:03:54.871677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.164 [2024-11-20 10:03:54.871826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.164 [2024-11-20 10:03:54.871975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.164 [2024-11-20 10:03:54.871982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.164 [2024-11-20 10:03:54.871987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.164 [2024-11-20 10:03:54.871993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.164 [2024-11-20 10:03:54.883810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.164 [2024-11-20 10:03:54.884294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.164 [2024-11-20 10:03:54.884326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.164 [2024-11-20 10:03:54.884335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.164 [2024-11-20 10:03:54.884502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.164 [2024-11-20 10:03:54.884654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.164 [2024-11-20 10:03:54.884661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.164 [2024-11-20 10:03:54.884668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.164 [2024-11-20 10:03:54.884674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.164 [2024-11-20 10:03:54.896502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.164 [2024-11-20 10:03:54.897075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.164 [2024-11-20 10:03:54.897107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.164 [2024-11-20 10:03:54.897116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.164 [2024-11-20 10:03:54.897287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.164 [2024-11-20 10:03:54.897440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.164 [2024-11-20 10:03:54.897447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.164 [2024-11-20 10:03:54.897453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.164 [2024-11-20 10:03:54.897459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.164 [2024-11-20 10:03:54.909138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.164 [2024-11-20 10:03:54.909697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.164 [2024-11-20 10:03:54.909729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.164 [2024-11-20 10:03:54.909738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.164 [2024-11-20 10:03:54.909902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.164 [2024-11-20 10:03:54.910055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.164 [2024-11-20 10:03:54.910062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.164 [2024-11-20 10:03:54.910068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.164 [2024-11-20 10:03:54.910075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.164 [2024-11-20 10:03:54.921762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.164 [2024-11-20 10:03:54.922266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.164 [2024-11-20 10:03:54.922298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.164 [2024-11-20 10:03:54.922307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.164 [2024-11-20 10:03:54.922473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.164 [2024-11-20 10:03:54.922626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.164 [2024-11-20 10:03:54.922633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.164 [2024-11-20 10:03:54.922639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.164 [2024-11-20 10:03:54.922645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.164 [2024-11-20 10:03:54.934335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.164 [2024-11-20 10:03:54.934904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.164 [2024-11-20 10:03:54.934936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.164 [2024-11-20 10:03:54.934945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.164 [2024-11-20 10:03:54.935113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.164 [2024-11-20 10:03:54.935273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.164 [2024-11-20 10:03:54.935281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.164 [2024-11-20 10:03:54.935287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.164 [2024-11-20 10:03:54.935293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.164 [2024-11-20 10:03:54.946992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.164 [2024-11-20 10:03:54.947592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.164 [2024-11-20 10:03:54.947623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.164 [2024-11-20 10:03:54.947632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.164 [2024-11-20 10:03:54.947796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.164 [2024-11-20 10:03:54.947948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.164 [2024-11-20 10:03:54.947955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.164 [2024-11-20 10:03:54.947962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.164 [2024-11-20 10:03:54.947969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.164 [2024-11-20 10:03:54.959668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.164 [2024-11-20 10:03:54.960116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.164 [2024-11-20 10:03:54.960132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.165 [2024-11-20 10:03:54.960139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.165 [2024-11-20 10:03:54.960294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.165 [2024-11-20 10:03:54.960444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.165 [2024-11-20 10:03:54.960450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.165 [2024-11-20 10:03:54.960456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.165 [2024-11-20 10:03:54.960461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.165 [2024-11-20 10:03:54.972287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.165 [2024-11-20 10:03:54.972798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.165 [2024-11-20 10:03:54.972830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.165 [2024-11-20 10:03:54.972839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.165 [2024-11-20 10:03:54.973004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.165 [2024-11-20 10:03:54.973156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.165 [2024-11-20 10:03:54.973176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.165 [2024-11-20 10:03:54.973182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.165 [2024-11-20 10:03:54.973188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.165 [2024-11-20 10:03:54.984881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.165 [2024-11-20 10:03:54.985378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.165 [2024-11-20 10:03:54.985395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.165 [2024-11-20 10:03:54.985401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.165 [2024-11-20 10:03:54.985550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.165 [2024-11-20 10:03:54.985699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.165 [2024-11-20 10:03:54.985706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.165 [2024-11-20 10:03:54.985712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.165 [2024-11-20 10:03:54.985717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.165 [2024-11-20 10:03:54.997539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.165 [2024-11-20 10:03:54.998014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.165 [2024-11-20 10:03:54.998028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.165 [2024-11-20 10:03:54.998033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.165 [2024-11-20 10:03:54.998187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.165 [2024-11-20 10:03:54.998337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.165 [2024-11-20 10:03:54.998344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.165 [2024-11-20 10:03:54.998350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.165 [2024-11-20 10:03:54.998355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.165 [2024-11-20 10:03:55.010167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.165 [2024-11-20 10:03:55.010749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.165 [2024-11-20 10:03:55.010781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.165 [2024-11-20 10:03:55.010789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.165 [2024-11-20 10:03:55.010953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.165 [2024-11-20 10:03:55.011106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.165 [2024-11-20 10:03:55.011113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.165 [2024-11-20 10:03:55.011120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.165 [2024-11-20 10:03:55.011129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.165 [2024-11-20 10:03:55.022819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.165 [2024-11-20 10:03:55.023377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.165 [2024-11-20 10:03:55.023409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.165 [2024-11-20 10:03:55.023418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.165 [2024-11-20 10:03:55.023582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.165 [2024-11-20 10:03:55.023735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.165 [2024-11-20 10:03:55.023742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.165 [2024-11-20 10:03:55.023748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.165 [2024-11-20 10:03:55.023755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.165 [2024-11-20 10:03:55.035445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.165 [2024-11-20 10:03:55.036046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.165 [2024-11-20 10:03:55.036079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.165 [2024-11-20 10:03:55.036087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.165 [2024-11-20 10:03:55.036260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.165 [2024-11-20 10:03:55.036413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.165 [2024-11-20 10:03:55.036420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.165 [2024-11-20 10:03:55.036426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.165 [2024-11-20 10:03:55.036432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.165 [2024-11-20 10:03:55.048123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.165 [2024-11-20 10:03:55.048754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.165 [2024-11-20 10:03:55.048786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.165 [2024-11-20 10:03:55.048795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.165 [2024-11-20 10:03:55.048959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.165 [2024-11-20 10:03:55.049112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.165 [2024-11-20 10:03:55.049119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.165 [2024-11-20 10:03:55.049124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.165 [2024-11-20 10:03:55.049130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.165 [2024-11-20 10:03:55.060822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.165 [2024-11-20 10:03:55.061380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.165 [2024-11-20 10:03:55.061412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.165 [2024-11-20 10:03:55.061421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.165 [2024-11-20 10:03:55.061585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.165 [2024-11-20 10:03:55.061737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.165 [2024-11-20 10:03:55.061744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.165 [2024-11-20 10:03:55.061750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.165 [2024-11-20 10:03:55.061756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.165 [2024-11-20 10:03:55.073460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.165 [2024-11-20 10:03:55.074012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.165 [2024-11-20 10:03:55.074044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.165 [2024-11-20 10:03:55.074053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.165 [2024-11-20 10:03:55.074225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.165 [2024-11-20 10:03:55.074378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.165 [2024-11-20 10:03:55.074385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.165 [2024-11-20 10:03:55.074392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.165 [2024-11-20 10:03:55.074397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.428 [2024-11-20 10:03:55.086089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.429 [2024-11-20 10:03:55.086660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.429 [2024-11-20 10:03:55.086691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.429 [2024-11-20 10:03:55.086700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.429 [2024-11-20 10:03:55.086864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.429 [2024-11-20 10:03:55.087017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.429 [2024-11-20 10:03:55.087024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.429 [2024-11-20 10:03:55.087029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.429 [2024-11-20 10:03:55.087035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.429 [2024-11-20 10:03:55.098723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.429 [2024-11-20 10:03:55.099199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.429 [2024-11-20 10:03:55.099230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.429 [2024-11-20 10:03:55.099239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.429 [2024-11-20 10:03:55.099410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.429 [2024-11-20 10:03:55.099562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.429 [2024-11-20 10:03:55.099570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.429 [2024-11-20 10:03:55.099576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.429 [2024-11-20 10:03:55.099583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.429 [2024-11-20 10:03:55.111416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.429 [2024-11-20 10:03:55.111869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.429 [2024-11-20 10:03:55.111886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.429 [2024-11-20 10:03:55.111892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.429 [2024-11-20 10:03:55.112042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.429 [2024-11-20 10:03:55.112196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.429 [2024-11-20 10:03:55.112205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.429 [2024-11-20 10:03:55.112211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.429 [2024-11-20 10:03:55.112216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.429 [2024-11-20 10:03:55.124057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.429 [2024-11-20 10:03:55.124635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.429 [2024-11-20 10:03:55.124667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.429 [2024-11-20 10:03:55.124676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.429 [2024-11-20 10:03:55.124840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.429 [2024-11-20 10:03:55.124992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.429 [2024-11-20 10:03:55.125000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.429 [2024-11-20 10:03:55.125006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.429 [2024-11-20 10:03:55.125012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.429 [2024-11-20 10:03:55.136720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.429 [2024-11-20 10:03:55.137060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.429 [2024-11-20 10:03:55.137076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.429 [2024-11-20 10:03:55.137082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.429 [2024-11-20 10:03:55.137236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.429 [2024-11-20 10:03:55.137386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.429 [2024-11-20 10:03:55.137396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.429 [2024-11-20 10:03:55.137402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.429 [2024-11-20 10:03:55.137407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.429 [2024-11-20 10:03:55.149314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.429 [2024-11-20 10:03:55.149811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.429 [2024-11-20 10:03:55.149825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.429 [2024-11-20 10:03:55.149831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.429 [2024-11-20 10:03:55.149979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.429 [2024-11-20 10:03:55.150128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.429 [2024-11-20 10:03:55.150135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.429 [2024-11-20 10:03:55.150140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.429 [2024-11-20 10:03:55.150145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.429 [2024-11-20 10:03:55.161978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.429 [2024-11-20 10:03:55.162478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.429 [2024-11-20 10:03:55.162492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.429 [2024-11-20 10:03:55.162497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.429 [2024-11-20 10:03:55.162646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.429 [2024-11-20 10:03:55.162795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.429 [2024-11-20 10:03:55.162801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.429 [2024-11-20 10:03:55.162807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.429 [2024-11-20 10:03:55.162812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.429 [2024-11-20 10:03:55.174643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.429 [2024-11-20 10:03:55.175225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.429 [2024-11-20 10:03:55.175257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.429 [2024-11-20 10:03:55.175266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.429 [2024-11-20 10:03:55.175430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.429 [2024-11-20 10:03:55.175582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.429 [2024-11-20 10:03:55.175590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.429 [2024-11-20 10:03:55.175595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.429 [2024-11-20 10:03:55.175605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.429 [2024-11-20 10:03:55.187296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.429 [2024-11-20 10:03:55.187887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.429 [2024-11-20 10:03:55.187918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.429 [2024-11-20 10:03:55.187927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.430 [2024-11-20 10:03:55.188091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.430 [2024-11-20 10:03:55.188252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.430 [2024-11-20 10:03:55.188261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.430 [2024-11-20 10:03:55.188267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.430 [2024-11-20 10:03:55.188274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.430 [2024-11-20 10:03:55.199950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.430 [2024-11-20 10:03:55.200471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.430 [2024-11-20 10:03:55.200503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.430 [2024-11-20 10:03:55.200511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.430 [2024-11-20 10:03:55.200676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.430 [2024-11-20 10:03:55.200828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.430 [2024-11-20 10:03:55.200835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.430 [2024-11-20 10:03:55.200841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.430 [2024-11-20 10:03:55.200848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.430 [2024-11-20 10:03:55.212540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.430 [2024-11-20 10:03:55.212918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.430 [2024-11-20 10:03:55.212934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.430 [2024-11-20 10:03:55.212941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.430 [2024-11-20 10:03:55.213090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.430 [2024-11-20 10:03:55.213245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.430 [2024-11-20 10:03:55.213252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.430 [2024-11-20 10:03:55.213258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.430 [2024-11-20 10:03:55.213263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.430 [2024-11-20 10:03:55.225218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.430 [2024-11-20 10:03:55.225754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.430 [2024-11-20 10:03:55.225790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.430 [2024-11-20 10:03:55.225799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.430 [2024-11-20 10:03:55.225963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.430 [2024-11-20 10:03:55.226116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.430 [2024-11-20 10:03:55.226123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.430 [2024-11-20 10:03:55.226128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.430 [2024-11-20 10:03:55.226134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.430 [2024-11-20 10:03:55.237824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.430 [2024-11-20 10:03:55.238381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.430 [2024-11-20 10:03:55.238413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.430 [2024-11-20 10:03:55.238422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.430 [2024-11-20 10:03:55.238586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.430 [2024-11-20 10:03:55.238739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.430 [2024-11-20 10:03:55.238746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.430 [2024-11-20 10:03:55.238751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.430 [2024-11-20 10:03:55.238757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.430 [2024-11-20 10:03:55.250461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.430 [2024-11-20 10:03:55.251053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.430 [2024-11-20 10:03:55.251084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.430 [2024-11-20 10:03:55.251093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.430 [2024-11-20 10:03:55.251265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.430 [2024-11-20 10:03:55.251418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.430 [2024-11-20 10:03:55.251425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.430 [2024-11-20 10:03:55.251431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.430 [2024-11-20 10:03:55.251438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.430 [2024-11-20 10:03:55.263123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.430 [2024-11-20 10:03:55.263724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.430 [2024-11-20 10:03:55.263756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.430 [2024-11-20 10:03:55.263764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.430 [2024-11-20 10:03:55.263932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.430 [2024-11-20 10:03:55.264085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.430 [2024-11-20 10:03:55.264092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.430 [2024-11-20 10:03:55.264098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.430 [2024-11-20 10:03:55.264104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.430 [2024-11-20 10:03:55.275808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.430 [2024-11-20 10:03:55.276460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.430 [2024-11-20 10:03:55.276492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.430 [2024-11-20 10:03:55.276501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.430 [2024-11-20 10:03:55.276665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.430 [2024-11-20 10:03:55.276818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.430 [2024-11-20 10:03:55.276825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.430 [2024-11-20 10:03:55.276831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.430 [2024-11-20 10:03:55.276837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.430 [2024-11-20 10:03:55.288379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.430 [2024-11-20 10:03:55.288868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.430 [2024-11-20 10:03:55.288883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.430 [2024-11-20 10:03:55.288889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.431 [2024-11-20 10:03:55.289038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.431 [2024-11-20 10:03:55.289193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.431 [2024-11-20 10:03:55.289200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.431 [2024-11-20 10:03:55.289206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.431 [2024-11-20 10:03:55.289212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.431 [2024-11-20 10:03:55.301028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.431 [2024-11-20 10:03:55.301500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.431 [2024-11-20 10:03:55.301514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.431 [2024-11-20 10:03:55.301519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.431 [2024-11-20 10:03:55.301667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.431 [2024-11-20 10:03:55.301816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.431 [2024-11-20 10:03:55.301826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.431 [2024-11-20 10:03:55.301832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.431 [2024-11-20 10:03:55.301838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.431 [2024-11-20 10:03:55.313656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.431 [2024-11-20 10:03:55.314258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.431 [2024-11-20 10:03:55.314289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.431 [2024-11-20 10:03:55.314298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.431 [2024-11-20 10:03:55.314465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.431 [2024-11-20 10:03:55.314617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.431 [2024-11-20 10:03:55.314624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.431 [2024-11-20 10:03:55.314630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.431 [2024-11-20 10:03:55.314636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.431 [2024-11-20 10:03:55.326331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.431 [2024-11-20 10:03:55.326797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.431 [2024-11-20 10:03:55.326828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.431 [2024-11-20 10:03:55.326838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.431 [2024-11-20 10:03:55.327003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.431 [2024-11-20 10:03:55.327156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.431 [2024-11-20 10:03:55.327171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.431 [2024-11-20 10:03:55.327177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.431 [2024-11-20 10:03:55.327184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.431 [2024-11-20 10:03:55.339018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.431 [2024-11-20 10:03:55.339513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.431 [2024-11-20 10:03:55.339529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.431 [2024-11-20 10:03:55.339535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.431 [2024-11-20 10:03:55.339683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.431 [2024-11-20 10:03:55.339832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.431 [2024-11-20 10:03:55.339839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.431 [2024-11-20 10:03:55.339845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.431 [2024-11-20 10:03:55.339854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.694 [2024-11-20 10:03:55.351697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.694 [2024-11-20 10:03:55.352006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.694 [2024-11-20 10:03:55.352021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.694 [2024-11-20 10:03:55.352027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.694 [2024-11-20 10:03:55.352180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.694 [2024-11-20 10:03:55.352330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.694 [2024-11-20 10:03:55.352337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.694 [2024-11-20 10:03:55.352342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.694 [2024-11-20 10:03:55.352347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.694 [2024-11-20 10:03:55.364329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.694 [2024-11-20 10:03:55.364831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.694 [2024-11-20 10:03:55.364844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.694 [2024-11-20 10:03:55.364850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.694 [2024-11-20 10:03:55.364999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.694 [2024-11-20 10:03:55.365148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.694 [2024-11-20 10:03:55.365156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.694 [2024-11-20 10:03:55.365166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.694 [2024-11-20 10:03:55.365171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.694 [2024-11-20 10:03:55.377026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.694 [2024-11-20 10:03:55.377508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.694 [2024-11-20 10:03:55.377540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.694 [2024-11-20 10:03:55.377549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.694 [2024-11-20 10:03:55.377715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.694 [2024-11-20 10:03:55.377867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.694 [2024-11-20 10:03:55.377874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.694 [2024-11-20 10:03:55.377880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.694 [2024-11-20 10:03:55.377886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.694 [2024-11-20 10:03:55.389613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.694 [2024-11-20 10:03:55.390176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.694 [2024-11-20 10:03:55.390216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.694 [2024-11-20 10:03:55.390225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.694 [2024-11-20 10:03:55.390389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.694 [2024-11-20 10:03:55.390541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.694 [2024-11-20 10:03:55.390548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.694 [2024-11-20 10:03:55.390554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.694 [2024-11-20 10:03:55.390560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.695 [2024-11-20 10:03:55.402254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.695 [2024-11-20 10:03:55.402849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.695 [2024-11-20 10:03:55.402882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.695 [2024-11-20 10:03:55.402890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.695 [2024-11-20 10:03:55.403055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.695 [2024-11-20 10:03:55.403215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.695 [2024-11-20 10:03:55.403223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.695 [2024-11-20 10:03:55.403229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.695 [2024-11-20 10:03:55.403235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.695 [2024-11-20 10:03:55.414935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.695 [2024-11-20 10:03:55.415490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.695 [2024-11-20 10:03:55.415523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.695 [2024-11-20 10:03:55.415532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.695 [2024-11-20 10:03:55.415696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.695 [2024-11-20 10:03:55.415848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.695 [2024-11-20 10:03:55.415855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.695 [2024-11-20 10:03:55.415861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.695 [2024-11-20 10:03:55.415867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.695 [2024-11-20 10:03:55.427613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.695 [2024-11-20 10:03:55.428105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.695 [2024-11-20 10:03:55.428121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.695 [2024-11-20 10:03:55.428127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.695 [2024-11-20 10:03:55.428285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.695 [2024-11-20 10:03:55.428435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.695 [2024-11-20 10:03:55.428442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.695 [2024-11-20 10:03:55.428448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.695 [2024-11-20 10:03:55.428453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.695 [2024-11-20 10:03:55.440288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.695 [2024-11-20 10:03:55.440835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.695 [2024-11-20 10:03:55.440866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.695 [2024-11-20 10:03:55.440876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.695 [2024-11-20 10:03:55.441040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.695 [2024-11-20 10:03:55.441199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.695 [2024-11-20 10:03:55.441207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.695 [2024-11-20 10:03:55.441213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.695 [2024-11-20 10:03:55.441219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.695 [2024-11-20 10:03:55.452917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.695 [2024-11-20 10:03:55.453396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.695 [2024-11-20 10:03:55.453412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.695 [2024-11-20 10:03:55.453418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.695 [2024-11-20 10:03:55.453567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.695 [2024-11-20 10:03:55.453716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.695 [2024-11-20 10:03:55.453723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.695 [2024-11-20 10:03:55.453728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.695 [2024-11-20 10:03:55.453733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.695 [2024-11-20 10:03:55.465560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.695 [2024-11-20 10:03:55.466045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.695 [2024-11-20 10:03:55.466059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.695 [2024-11-20 10:03:55.466064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.695 [2024-11-20 10:03:55.466218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.695 [2024-11-20 10:03:55.466368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.695 [2024-11-20 10:03:55.466379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.695 [2024-11-20 10:03:55.466384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.695 [2024-11-20 10:03:55.466389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.695 [2024-11-20 10:03:55.478216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.695 [2024-11-20 10:03:55.478658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.695 [2024-11-20 10:03:55.478671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.695 [2024-11-20 10:03:55.478677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.695 [2024-11-20 10:03:55.478825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.695 [2024-11-20 10:03:55.478974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.695 [2024-11-20 10:03:55.478981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.695 [2024-11-20 10:03:55.478986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.695 [2024-11-20 10:03:55.478991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.695 [2024-11-20 10:03:55.490848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.695 [2024-11-20 10:03:55.491253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.695 [2024-11-20 10:03:55.491268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.695 [2024-11-20 10:03:55.491273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.695 [2024-11-20 10:03:55.491422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.695 [2024-11-20 10:03:55.491571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.695 [2024-11-20 10:03:55.491578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.695 [2024-11-20 10:03:55.491583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.695 [2024-11-20 10:03:55.491588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.696 [2024-11-20 10:03:55.503424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.696 [2024-11-20 10:03:55.503900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.696 [2024-11-20 10:03:55.503912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.696 [2024-11-20 10:03:55.503918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.696 [2024-11-20 10:03:55.504066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.696 [2024-11-20 10:03:55.504220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.696 [2024-11-20 10:03:55.504227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.696 [2024-11-20 10:03:55.504233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.696 [2024-11-20 10:03:55.504241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.696 [2024-11-20 10:03:55.516068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.696 [2024-11-20 10:03:55.516555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.696 [2024-11-20 10:03:55.516569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.696 [2024-11-20 10:03:55.516574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.696 [2024-11-20 10:03:55.516722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.696 [2024-11-20 10:03:55.516871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.696 [2024-11-20 10:03:55.516879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.696 [2024-11-20 10:03:55.516884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.696 [2024-11-20 10:03:55.516889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.696 [2024-11-20 10:03:55.528729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.696 [2024-11-20 10:03:55.529095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.696 [2024-11-20 10:03:55.529108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.696 [2024-11-20 10:03:55.529113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.696 [2024-11-20 10:03:55.529266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.696 [2024-11-20 10:03:55.529415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.696 [2024-11-20 10:03:55.529422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.696 [2024-11-20 10:03:55.529427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.696 [2024-11-20 10:03:55.529432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.696 [2024-11-20 10:03:55.541415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.696 [2024-11-20 10:03:55.541887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.696 [2024-11-20 10:03:55.541901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.696 [2024-11-20 10:03:55.541906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.696 [2024-11-20 10:03:55.542055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.696 [2024-11-20 10:03:55.542216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.696 [2024-11-20 10:03:55.542224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.696 [2024-11-20 10:03:55.542229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.696 [2024-11-20 10:03:55.542235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.696 [2024-11-20 10:03:55.554075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.696 [2024-11-20 10:03:55.554553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.696 [2024-11-20 10:03:55.554569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.696 [2024-11-20 10:03:55.554575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.696 [2024-11-20 10:03:55.554723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.696 [2024-11-20 10:03:55.554872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.696 [2024-11-20 10:03:55.554880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.696 [2024-11-20 10:03:55.554885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.696 [2024-11-20 10:03:55.554890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.696 5633.60 IOPS, 22.01 MiB/s [2024-11-20T09:03:55.612Z] [2024-11-20 10:03:55.566742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.696 [2024-11-20 10:03:55.567188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.696 [2024-11-20 10:03:55.567202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.696 [2024-11-20 10:03:55.567207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.696 [2024-11-20 10:03:55.567356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.696 [2024-11-20 10:03:55.567504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.696 [2024-11-20 10:03:55.567511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.696 [2024-11-20 10:03:55.567517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.696 [2024-11-20 10:03:55.567522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.696 [2024-11-20 10:03:55.579372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:24.696 [2024-11-20 10:03:55.579812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.696 [2024-11-20 10:03:55.579825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:24.696 [2024-11-20 10:03:55.579831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:24.696 [2024-11-20 10:03:55.579979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:24.696 [2024-11-20 10:03:55.580128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:24.696 [2024-11-20 10:03:55.580135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:24.696 [2024-11-20 10:03:55.580140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:24.696 [2024-11-20 10:03:55.580145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:24.696 [2024-11-20 10:03:55.591982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.696 [2024-11-20 10:03:55.592508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.696 [2024-11-20 10:03:55.592540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.696 [2024-11-20 10:03:55.592549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.696 [2024-11-20 10:03:55.592716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.696 [2024-11-20 10:03:55.592869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.696 [2024-11-20 10:03:55.592876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.696 [2024-11-20 10:03:55.592882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.696 [2024-11-20 10:03:55.592888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.697 [2024-11-20 10:03:55.604597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.697 [2024-11-20 10:03:55.605048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.697 [2024-11-20 10:03:55.605064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.697 [2024-11-20 10:03:55.605070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.697 [2024-11-20 10:03:55.605224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.697 [2024-11-20 10:03:55.605373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.697 [2024-11-20 10:03:55.605380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.697 [2024-11-20 10:03:55.605385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.697 [2024-11-20 10:03:55.605391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.958 [2024-11-20 10:03:55.617261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.958 [2024-11-20 10:03:55.617719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.959 [2024-11-20 10:03:55.617734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.959 [2024-11-20 10:03:55.617740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.959 [2024-11-20 10:03:55.617889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.959 [2024-11-20 10:03:55.618038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.959 [2024-11-20 10:03:55.618046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.959 [2024-11-20 10:03:55.618051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.959 [2024-11-20 10:03:55.618057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.959 [2024-11-20 10:03:55.629896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.959 [2024-11-20 10:03:55.630331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.959 [2024-11-20 10:03:55.630344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.959 [2024-11-20 10:03:55.630350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.959 [2024-11-20 10:03:55.630499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.959 [2024-11-20 10:03:55.630648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.959 [2024-11-20 10:03:55.630659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.959 [2024-11-20 10:03:55.630664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.959 [2024-11-20 10:03:55.630669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.959 [2024-11-20 10:03:55.642521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.959 [2024-11-20 10:03:55.642994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.959 [2024-11-20 10:03:55.643007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.959 [2024-11-20 10:03:55.643013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.959 [2024-11-20 10:03:55.643166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.959 [2024-11-20 10:03:55.643316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.959 [2024-11-20 10:03:55.643323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.959 [2024-11-20 10:03:55.643328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.959 [2024-11-20 10:03:55.643334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.959 [2024-11-20 10:03:55.655173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.959 [2024-11-20 10:03:55.655616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.959 [2024-11-20 10:03:55.655629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.959 [2024-11-20 10:03:55.655634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.959 [2024-11-20 10:03:55.655782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.959 [2024-11-20 10:03:55.655931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.959 [2024-11-20 10:03:55.655938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.959 [2024-11-20 10:03:55.655943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.959 [2024-11-20 10:03:55.655948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.959 [2024-11-20 10:03:55.667792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.959 [2024-11-20 10:03:55.668307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.959 [2024-11-20 10:03:55.668321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.959 [2024-11-20 10:03:55.668327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.959 [2024-11-20 10:03:55.668475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.959 [2024-11-20 10:03:55.668624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.959 [2024-11-20 10:03:55.668631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.959 [2024-11-20 10:03:55.668636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.959 [2024-11-20 10:03:55.668645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.959 [2024-11-20 10:03:55.680361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.959 [2024-11-20 10:03:55.680851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.959 [2024-11-20 10:03:55.680866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.959 [2024-11-20 10:03:55.680871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.959 [2024-11-20 10:03:55.681020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.959 [2024-11-20 10:03:55.681173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.959 [2024-11-20 10:03:55.681180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.959 [2024-11-20 10:03:55.681186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.959 [2024-11-20 10:03:55.681190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.959 [2024-11-20 10:03:55.693025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.959 [2024-11-20 10:03:55.693446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.959 [2024-11-20 10:03:55.693459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.959 [2024-11-20 10:03:55.693465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.959 [2024-11-20 10:03:55.693613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.959 [2024-11-20 10:03:55.693762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.959 [2024-11-20 10:03:55.693769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.959 [2024-11-20 10:03:55.693775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.959 [2024-11-20 10:03:55.693780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.959 [2024-11-20 10:03:55.705611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.959 [2024-11-20 10:03:55.705960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.959 [2024-11-20 10:03:55.705973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.959 [2024-11-20 10:03:55.705979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.959 [2024-11-20 10:03:55.706126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.959 [2024-11-20 10:03:55.706282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.959 [2024-11-20 10:03:55.706289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.959 [2024-11-20 10:03:55.706294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.959 [2024-11-20 10:03:55.706299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.959 [2024-11-20 10:03:55.718279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.959 [2024-11-20 10:03:55.718864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.959 [2024-11-20 10:03:55.718897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.959 [2024-11-20 10:03:55.718906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.959 [2024-11-20 10:03:55.719070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.959 [2024-11-20 10:03:55.719230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.959 [2024-11-20 10:03:55.719239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.959 [2024-11-20 10:03:55.719245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.959 [2024-11-20 10:03:55.719250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.959 [2024-11-20 10:03:55.730951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.959 [2024-11-20 10:03:55.731452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.959 [2024-11-20 10:03:55.731468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.959 [2024-11-20 10:03:55.731474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.959 [2024-11-20 10:03:55.731623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.959 [2024-11-20 10:03:55.731773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.959 [2024-11-20 10:03:55.731780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.959 [2024-11-20 10:03:55.731785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.959 [2024-11-20 10:03:55.731790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.960 [2024-11-20 10:03:55.743640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.960 [2024-11-20 10:03:55.744135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.960 [2024-11-20 10:03:55.744148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.960 [2024-11-20 10:03:55.744153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.960 [2024-11-20 10:03:55.744307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.960 [2024-11-20 10:03:55.744456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.960 [2024-11-20 10:03:55.744464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.960 [2024-11-20 10:03:55.744469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.960 [2024-11-20 10:03:55.744474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.960 [2024-11-20 10:03:55.756312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.960 [2024-11-20 10:03:55.756792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.960 [2024-11-20 10:03:55.756806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.960 [2024-11-20 10:03:55.756812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.960 [2024-11-20 10:03:55.756963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.960 [2024-11-20 10:03:55.757113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.960 [2024-11-20 10:03:55.757120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.960 [2024-11-20 10:03:55.757125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.960 [2024-11-20 10:03:55.757130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.960 [2024-11-20 10:03:55.768976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.960 [2024-11-20 10:03:55.769438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.960 [2024-11-20 10:03:55.769452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.960 [2024-11-20 10:03:55.769458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.960 [2024-11-20 10:03:55.769606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.960 [2024-11-20 10:03:55.769755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.960 [2024-11-20 10:03:55.769762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.960 [2024-11-20 10:03:55.769767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.960 [2024-11-20 10:03:55.769772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.960 [2024-11-20 10:03:55.781621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.960 [2024-11-20 10:03:55.782143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.960 [2024-11-20 10:03:55.782156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.960 [2024-11-20 10:03:55.782167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.960 [2024-11-20 10:03:55.782315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.960 [2024-11-20 10:03:55.782464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.960 [2024-11-20 10:03:55.782471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.960 [2024-11-20 10:03:55.782477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.960 [2024-11-20 10:03:55.782482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.960 [2024-11-20 10:03:55.794318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.960 [2024-11-20 10:03:55.794800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.960 [2024-11-20 10:03:55.794813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.960 [2024-11-20 10:03:55.794819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.960 [2024-11-20 10:03:55.794966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.960 [2024-11-20 10:03:55.795115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.960 [2024-11-20 10:03:55.795125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.960 [2024-11-20 10:03:55.795131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.960 [2024-11-20 10:03:55.795136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.960 [2024-11-20 10:03:55.806976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.960 [2024-11-20 10:03:55.807521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.960 [2024-11-20 10:03:55.807553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.960 [2024-11-20 10:03:55.807562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.960 [2024-11-20 10:03:55.807726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.960 [2024-11-20 10:03:55.807878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.960 [2024-11-20 10:03:55.807885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.960 [2024-11-20 10:03:55.807891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.960 [2024-11-20 10:03:55.807897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.960 [2024-11-20 10:03:55.819592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.960 [2024-11-20 10:03:55.820085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.960 [2024-11-20 10:03:55.820101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.960 [2024-11-20 10:03:55.820107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.960 [2024-11-20 10:03:55.820261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.960 [2024-11-20 10:03:55.820411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.960 [2024-11-20 10:03:55.820418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.960 [2024-11-20 10:03:55.820423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.960 [2024-11-20 10:03:55.820428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.960 [2024-11-20 10:03:55.832260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.960 [2024-11-20 10:03:55.832771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.960 [2024-11-20 10:03:55.832803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.960 [2024-11-20 10:03:55.832812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.960 [2024-11-20 10:03:55.832976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.960 [2024-11-20 10:03:55.833129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.960 [2024-11-20 10:03:55.833136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.960 [2024-11-20 10:03:55.833143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.960 [2024-11-20 10:03:55.833152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.960 [2024-11-20 10:03:55.844872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.960 [2024-11-20 10:03:55.845371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.960 [2024-11-20 10:03:55.845388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.960 [2024-11-20 10:03:55.845394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.960 [2024-11-20 10:03:55.845543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.960 [2024-11-20 10:03:55.845692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.960 [2024-11-20 10:03:55.845699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.960 [2024-11-20 10:03:55.845705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.960 [2024-11-20 10:03:55.845710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:24.960 [2024-11-20 10:03:55.857551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:24.960 [2024-11-20 10:03:55.858002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.960 [2024-11-20 10:03:55.858034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:24.960 [2024-11-20 10:03:55.858043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:24.960 [2024-11-20 10:03:55.858220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:24.960 [2024-11-20 10:03:55.858374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:24.960 [2024-11-20 10:03:55.858381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:24.961 [2024-11-20 10:03:55.858387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:24.961 [2024-11-20 10:03:55.858393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:24.961 [2024-11-20 10:03:55.870245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.226 [2024-11-20 10:03:55.870696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.226 [2024-11-20 10:03:55.870713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.226 [2024-11-20 10:03:55.870721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.226 [2024-11-20 10:03:55.870871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.226 [2024-11-20 10:03:55.871022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.226 [2024-11-20 10:03:55.871029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.226 [2024-11-20 10:03:55.871034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.226 [2024-11-20 10:03:55.871039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.226 [2024-11-20 10:03:55.882894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.226 [2024-11-20 10:03:55.883362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.226 [2024-11-20 10:03:55.883377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.226 [2024-11-20 10:03:55.883382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.226 [2024-11-20 10:03:55.883531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.226 [2024-11-20 10:03:55.883680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.226 [2024-11-20 10:03:55.883687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.226 [2024-11-20 10:03:55.883693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.226 [2024-11-20 10:03:55.883697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.226 [2024-11-20 10:03:55.895538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.226 [2024-11-20 10:03:55.896025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.226 [2024-11-20 10:03:55.896038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.226 [2024-11-20 10:03:55.896044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.226 [2024-11-20 10:03:55.896199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.226 [2024-11-20 10:03:55.896349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.226 [2024-11-20 10:03:55.896355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.226 [2024-11-20 10:03:55.896361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.226 [2024-11-20 10:03:55.896367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.226 [2024-11-20 10:03:55.908204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.226 [2024-11-20 10:03:55.908782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.226 [2024-11-20 10:03:55.908814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.226 [2024-11-20 10:03:55.908822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.226 [2024-11-20 10:03:55.908987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.226 [2024-11-20 10:03:55.909139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.226 [2024-11-20 10:03:55.909146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.226 [2024-11-20 10:03:55.909152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.226 [2024-11-20 10:03:55.909166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.226 [2024-11-20 10:03:55.920869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.226 [2024-11-20 10:03:55.921319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.226 [2024-11-20 10:03:55.921335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.226 [2024-11-20 10:03:55.921341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.226 [2024-11-20 10:03:55.921494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.226 [2024-11-20 10:03:55.921643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.226 [2024-11-20 10:03:55.921650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.226 [2024-11-20 10:03:55.921656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.226 [2024-11-20 10:03:55.921661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.226 [2024-11-20 10:03:55.933530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.226 [2024-11-20 10:03:55.933986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.226 [2024-11-20 10:03:55.933999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.226 [2024-11-20 10:03:55.934005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.226 [2024-11-20 10:03:55.934154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.226 [2024-11-20 10:03:55.934311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.226 [2024-11-20 10:03:55.934318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.226 [2024-11-20 10:03:55.934323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.226 [2024-11-20 10:03:55.934328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.226 [2024-11-20 10:03:55.946177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.226 [2024-11-20 10:03:55.946715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.226 [2024-11-20 10:03:55.946747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.226 [2024-11-20 10:03:55.946756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.226 [2024-11-20 10:03:55.946920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.226 [2024-11-20 10:03:55.947072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.226 [2024-11-20 10:03:55.947080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.226 [2024-11-20 10:03:55.947087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.226 [2024-11-20 10:03:55.947094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.227 [2024-11-20 10:03:55.958809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.227 [2024-11-20 10:03:55.959282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.227 [2024-11-20 10:03:55.959299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.227 [2024-11-20 10:03:55.959305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.227 [2024-11-20 10:03:55.959454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.227 [2024-11-20 10:03:55.959605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.227 [2024-11-20 10:03:55.959615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.227 [2024-11-20 10:03:55.959621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.227 [2024-11-20 10:03:55.959627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.227 [2024-11-20 10:03:55.971476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.227 [2024-11-20 10:03:55.971926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.227 [2024-11-20 10:03:55.971940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.227 [2024-11-20 10:03:55.971946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.227 [2024-11-20 10:03:55.972094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.227 [2024-11-20 10:03:55.972250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.227 [2024-11-20 10:03:55.972258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.227 [2024-11-20 10:03:55.972264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.227 [2024-11-20 10:03:55.972269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.227 [2024-11-20 10:03:55.984119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.227 [2024-11-20 10:03:55.984451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.227 [2024-11-20 10:03:55.984465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.227 [2024-11-20 10:03:55.984471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.227 [2024-11-20 10:03:55.984619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.227 [2024-11-20 10:03:55.984768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.227 [2024-11-20 10:03:55.984775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.227 [2024-11-20 10:03:55.984780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.227 [2024-11-20 10:03:55.984785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.227 [2024-11-20 10:03:55.996769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.227 [2024-11-20 10:03:55.997223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.227 [2024-11-20 10:03:55.997237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.227 [2024-11-20 10:03:55.997243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.227 [2024-11-20 10:03:55.997391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.227 [2024-11-20 10:03:55.997540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.227 [2024-11-20 10:03:55.997547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.227 [2024-11-20 10:03:55.997553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.227 [2024-11-20 10:03:55.997564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.227 [2024-11-20 10:03:56.009403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.227 [2024-11-20 10:03:56.009845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.227 [2024-11-20 10:03:56.009858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.227 [2024-11-20 10:03:56.009864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.227 [2024-11-20 10:03:56.010013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.227 [2024-11-20 10:03:56.010167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.227 [2024-11-20 10:03:56.010174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.227 [2024-11-20 10:03:56.010180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.227 [2024-11-20 10:03:56.010185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.227 [2024-11-20 10:03:56.022020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.227 [2024-11-20 10:03:56.022609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.227 [2024-11-20 10:03:56.022642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.227 [2024-11-20 10:03:56.022651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.227 [2024-11-20 10:03:56.022815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.227 [2024-11-20 10:03:56.022967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.227 [2024-11-20 10:03:56.022974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.227 [2024-11-20 10:03:56.022980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.227 [2024-11-20 10:03:56.022986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.227 [2024-11-20 10:03:56.034699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.227 [2024-11-20 10:03:56.035204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.227 [2024-11-20 10:03:56.035226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.227 [2024-11-20 10:03:56.035233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.227 [2024-11-20 10:03:56.035388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.227 [2024-11-20 10:03:56.035538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.227 [2024-11-20 10:03:56.035545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.227 [2024-11-20 10:03:56.035551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.227 [2024-11-20 10:03:56.035556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.227 [2024-11-20 10:03:56.047402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.227 [2024-11-20 10:03:56.047891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.227 [2024-11-20 10:03:56.047905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.227 [2024-11-20 10:03:56.047911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.227 [2024-11-20 10:03:56.048059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.227 [2024-11-20 10:03:56.048214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.227 [2024-11-20 10:03:56.048222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.227 [2024-11-20 10:03:56.048227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.227 [2024-11-20 10:03:56.048232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.227 [2024-11-20 10:03:56.060060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.227 [2024-11-20 10:03:56.060520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.227 [2024-11-20 10:03:56.060534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.227 [2024-11-20 10:03:56.060539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.227 [2024-11-20 10:03:56.060688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.227 [2024-11-20 10:03:56.060837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.227 [2024-11-20 10:03:56.060844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.227 [2024-11-20 10:03:56.060849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.227 [2024-11-20 10:03:56.060854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.228 [2024-11-20 10:03:56.072692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:25.228 [2024-11-20 10:03:56.073230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.228 [2024-11-20 10:03:56.073262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:25.228 [2024-11-20 10:03:56.073271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:25.228 [2024-11-20 10:03:56.073435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:25.228 [2024-11-20 10:03:56.073595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:25.228 [2024-11-20 10:03:56.073603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:25.228 [2024-11-20 10:03:56.073609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:25.228 [2024-11-20 10:03:56.073614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:25.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1556754 Killed "${NVMF_APP[@]}" "$@"
00:30:25.228 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:30:25.228 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:30:25.228 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:25.228 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:25.228 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:25.228 [2024-11-20 10:03:56.085319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:25.228 [2024-11-20 10:03:56.085850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.228 [2024-11-20 10:03:56.085882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:25.228 [2024-11-20 10:03:56.085890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:25.228 [2024-11-20 10:03:56.086054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:25.228 [2024-11-20 10:03:56.086213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:25.228 [2024-11-20 10:03:56.086221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:25.228 [2024-11-20 10:03:56.086227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:25.228 [2024-11-20 10:03:56.086233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:25.228 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1558456
00:30:25.228 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1558456
00:30:25.228 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:30:25.228 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1558456 ']'
00:30:25.228 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:25.228 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:25.228 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:25.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:25.228 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:25.228 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:25.228 [2024-11-20 10:03:56.097923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:25.228 [2024-11-20 10:03:56.098268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.228 [2024-11-20 10:03:56.098285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:25.228 [2024-11-20 10:03:56.098291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:25.228 [2024-11-20 10:03:56.098440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:25.228 [2024-11-20 10:03:56.098589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:25.228 [2024-11-20 10:03:56.098596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:25.228 [2024-11-20 10:03:56.098602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:25.228 [2024-11-20 10:03:56.098607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
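The xtrace above shows nvmfappstart relaunching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten blocking until PID 1558456 is up and answering on /var/tmp/spdk.sock. A minimal sketch of what such a wait loop amounts to, reusing the rpc_addr and max_retries defaults visible in the trace (assumed logic, not the real autotest_common.sh implementation):

    # Poll until the target process is alive and its RPC socket exists,
    # or give up after max_retries iterations.
    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
        local i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited early
            [ -S "$rpc_addr" ] && return 0           # RPC socket is up
            sleep 0.1
        done
        return 1
    }

    # e.g.: wait_for_rpc 1558456 /var/tmp/spdk.sock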
00:30:25.228 [2024-11-20 10:03:56.110572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.228 [2024-11-20 10:03:56.110922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.228 [2024-11-20 10:03:56.110936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.228 [2024-11-20 10:03:56.110946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.228 [2024-11-20 10:03:56.111094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.228 [2024-11-20 10:03:56.111248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.228 [2024-11-20 10:03:56.111255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.228 [2024-11-20 10:03:56.111261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.228 [2024-11-20 10:03:56.111266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.228 [2024-11-20 10:03:56.123272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.228 [2024-11-20 10:03:56.123738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.228 [2024-11-20 10:03:56.123770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.228 [2024-11-20 10:03:56.123779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.228 [2024-11-20 10:03:56.123945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.228 [2024-11-20 10:03:56.124098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.228 [2024-11-20 10:03:56.124105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.228 [2024-11-20 10:03:56.124111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.228 [2024-11-20 10:03:56.124118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.228 [2024-11-20 10:03:56.135954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:25.491 [2024-11-20 10:03:56.136459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.491 [2024-11-20 10:03:56.136477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:25.491 [2024-11-20 10:03:56.136482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:25.491 [2024-11-20 10:03:56.136632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:25.491 [2024-11-20 10:03:56.136781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:25.491 [2024-11-20 10:03:56.136788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:25.491 [2024-11-20 10:03:56.136794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:25.491 [2024-11-20 10:03:56.136799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:25.491 [2024-11-20 10:03:56.138007] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:30:25.491 [2024-11-20 10:03:56.138057] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:25.491 [2024-11-20 10:03:56.148641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:25.491 [2024-11-20 10:03:56.149282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.491 [2024-11-20 10:03:56.149314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:25.491 [2024-11-20 10:03:56.149327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:25.491 [2024-11-20 10:03:56.149491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:25.491 [2024-11-20 10:03:56.149644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:25.491 [2024-11-20 10:03:56.149651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:25.491 [2024-11-20 10:03:56.149657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:25.491 [2024-11-20 10:03:56.149663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
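The DPDK EAL parameters record a few lines up shows how nvmf_tgt's own flags are translated for DPDK: -m 0xE becomes the EAL core mask -c 0xE, and -i 0 becomes --file-prefix=spdk0 so this instance's hugepage files do not collide with another target's. A hedged, standalone sketch of initializing EAL with a similar (trimmed) argument vector; rte_eal_init() and rte_eal_cleanup() are real DPDK calls, but the reduced flag list is illustrative only:

    /* eal_init.c - sketch only: feed DPDK an argv like the one in the log.
     * Build against DPDK, e.g. cc eal_init.c $(pkg-config --cflags --libs libdpdk). */
    #include <stdio.h>
    #include <rte_eal.h>

    int main(void)
    {
        char *eal_argv[] = {
            "nvmf",                /* program name, as in the log */
            "-c", "0xE",           /* same core mask nvmf_tgt received as -m 0xE */
            "--no-telemetry",
            "--file-prefix=spdk0", /* derived from nvmf_tgt -i 0 */
            "--proc-type=auto",
        };
        int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }
        puts("EAL initialized");
        return rte_eal_cleanup();
    }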
00:30:25.491 [2024-11-20 10:03:56.161225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.491 [2024-11-20 10:03:56.161689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.491 [2024-11-20 10:03:56.161704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.491 [2024-11-20 10:03:56.161710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.491 [2024-11-20 10:03:56.161859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.491 [2024-11-20 10:03:56.162009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.491 [2024-11-20 10:03:56.162015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.491 [2024-11-20 10:03:56.162021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.491 [2024-11-20 10:03:56.162026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.491 [2024-11-20 10:03:56.173878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.491 [2024-11-20 10:03:56.174239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.491 [2024-11-20 10:03:56.174254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.491 [2024-11-20 10:03:56.174260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.491 [2024-11-20 10:03:56.174409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.491 [2024-11-20 10:03:56.174558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.491 [2024-11-20 10:03:56.174565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.491 [2024-11-20 10:03:56.174570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.491 [2024-11-20 10:03:56.174575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.491 [2024-11-20 10:03:56.186472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.491 [2024-11-20 10:03:56.187052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.491 [2024-11-20 10:03:56.187083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.491 [2024-11-20 10:03:56.187093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.491 [2024-11-20 10:03:56.187263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.491 [2024-11-20 10:03:56.187419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.491 [2024-11-20 10:03:56.187427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.491 [2024-11-20 10:03:56.187432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.491 [2024-11-20 10:03:56.187438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.491 [2024-11-20 10:03:56.199130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.491 [2024-11-20 10:03:56.199720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.491 [2024-11-20 10:03:56.199752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.491 [2024-11-20 10:03:56.199762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.491 [2024-11-20 10:03:56.199926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.491 [2024-11-20 10:03:56.200079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.491 [2024-11-20 10:03:56.200085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.491 [2024-11-20 10:03:56.200092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.491 [2024-11-20 10:03:56.200098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.491 [2024-11-20 10:03:56.211790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.491 [2024-11-20 10:03:56.212284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.491 [2024-11-20 10:03:56.212315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.491 [2024-11-20 10:03:56.212324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.491 [2024-11-20 10:03:56.212491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.491 [2024-11-20 10:03:56.212643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.491 [2024-11-20 10:03:56.212651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.491 [2024-11-20 10:03:56.212657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.491 [2024-11-20 10:03:56.212663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.491 [2024-11-20 10:03:56.224495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.491 [2024-11-20 10:03:56.225093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.492 [2024-11-20 10:03:56.225125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.492 [2024-11-20 10:03:56.225134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.492 [2024-11-20 10:03:56.225306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.492 [2024-11-20 10:03:56.225459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.492 [2024-11-20 10:03:56.225467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.492 [2024-11-20 10:03:56.225477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.492 [2024-11-20 10:03:56.225483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.492 [2024-11-20 10:03:56.228517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:25.492 [2024-11-20 10:03:56.237177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:25.492 [2024-11-20 10:03:56.237797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.492 [2024-11-20 10:03:56.237829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:25.492 [2024-11-20 10:03:56.237838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:25.492 [2024-11-20 10:03:56.238003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:25.492 [2024-11-20 10:03:56.238156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:25.492 [2024-11-20 10:03:56.238170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:25.492 [2024-11-20 10:03:56.238177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:25.492 [2024-11-20 10:03:56.238183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:25.492 [2024-11-20 10:03:56.249882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:25.492 [2024-11-20 10:03:56.250501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.492 [2024-11-20 10:03:56.250533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:25.492 [2024-11-20 10:03:56.250542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:25.492 [2024-11-20 10:03:56.250707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:25.492 [2024-11-20 10:03:56.250860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:25.492 [2024-11-20 10:03:56.250867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:25.492 [2024-11-20 10:03:56.250874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:25.492 [2024-11-20 10:03:56.250880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:25.492 [2024-11-20 10:03:56.257778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:25.492 [2024-11-20 10:03:56.257799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:25.492 [2024-11-20 10:03:56.257806] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:25.492 [2024-11-20 10:03:56.257812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:25.492 [2024-11-20 10:03:56.257817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:25.492 [2024-11-20 10:03:56.259018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:25.492 [2024-11-20 10:03:56.259188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:25.492 [2024-11-20 10:03:56.259203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:25.492 [2024-11-20 10:03:56.262576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:25.492 [2024-11-20 10:03:56.263182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.492 [2024-11-20 10:03:56.263219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:25.492 [2024-11-20 10:03:56.263228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:25.492 [2024-11-20 10:03:56.263396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:25.492 [2024-11-20 10:03:56.263548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:25.492 [2024-11-20 10:03:56.263556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:25.492 [2024-11-20 10:03:56.263561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:25.492 [2024-11-20 10:03:56.263568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:25.492 [2024-11-20 10:03:56.275279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:25.492 [2024-11-20 10:03:56.275875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.492 [2024-11-20 10:03:56.275906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:25.492 [2024-11-20 10:03:56.275915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:25.492 [2024-11-20 10:03:56.276080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:25.492 [2024-11-20 10:03:56.276239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:25.492 [2024-11-20 10:03:56.276247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:25.492 [2024-11-20 10:03:56.276253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:25.492 [2024-11-20 10:03:56.276259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
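The three reactor notices above follow directly from the core mask: 0xE is binary 1110, so core 0 is skipped and one reactor thread starts on each of cores 1, 2 and 3, which is also why spdk_app_start reported "Total cores available: 3". A one-file decode of the mask, illustrative only:

    /* coremask.c - decode an SPDK/DPDK hex core mask; for 0xE this prints
     * "cores: 1 2 3 (count=3)", matching the reactor notices in the log. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long mask = 0xE;   /* from nvmf_tgt -m 0xE */
        int count = 0;

        printf("cores:");
        for (int core = 0; core < 64; core++) {
            if (mask >> core & 1UL) {
                printf(" %d", core);
                count++;
            }
        }
        printf(" (count=%d)\n", count);
        return 0;
    }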
00:30:25.492 [2024-11-20 10:03:56.287949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.492 [2024-11-20 10:03:56.288517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.492 [2024-11-20 10:03:56.288550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.492 [2024-11-20 10:03:56.288560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.492 [2024-11-20 10:03:56.288724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.492 [2024-11-20 10:03:56.288877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.492 [2024-11-20 10:03:56.288884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.492 [2024-11-20 10:03:56.288890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.492 [2024-11-20 10:03:56.288897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.492 [2024-11-20 10:03:56.300602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.492 [2024-11-20 10:03:56.301101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.492 [2024-11-20 10:03:56.301117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.492 [2024-11-20 10:03:56.301123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.492 [2024-11-20 10:03:56.301282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.492 [2024-11-20 10:03:56.301432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.492 [2024-11-20 10:03:56.301439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.492 [2024-11-20 10:03:56.301445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.492 [2024-11-20 10:03:56.301450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.492 [2024-11-20 10:03:56.313271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.492 [2024-11-20 10:03:56.313794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.492 [2024-11-20 10:03:56.313809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.492 [2024-11-20 10:03:56.313814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.492 [2024-11-20 10:03:56.313963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.492 [2024-11-20 10:03:56.314112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.492 [2024-11-20 10:03:56.314119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.492 [2024-11-20 10:03:56.314124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.492 [2024-11-20 10:03:56.314129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.492 [2024-11-20 10:03:56.325957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.492 [2024-11-20 10:03:56.326412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.492 [2024-11-20 10:03:56.326426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.492 [2024-11-20 10:03:56.326431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.492 [2024-11-20 10:03:56.326580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.492 [2024-11-20 10:03:56.326729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.493 [2024-11-20 10:03:56.326736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.493 [2024-11-20 10:03:56.326741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.493 [2024-11-20 10:03:56.326746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.493 [2024-11-20 10:03:56.338566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.493 [2024-11-20 10:03:56.339080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.493 [2024-11-20 10:03:56.339093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.493 [2024-11-20 10:03:56.339098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.493 [2024-11-20 10:03:56.339251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.493 [2024-11-20 10:03:56.339400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.493 [2024-11-20 10:03:56.339407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.493 [2024-11-20 10:03:56.339416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.493 [2024-11-20 10:03:56.339422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.493 [2024-11-20 10:03:56.351257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.493 [2024-11-20 10:03:56.351708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.493 [2024-11-20 10:03:56.351722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.493 [2024-11-20 10:03:56.351728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.493 [2024-11-20 10:03:56.351876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.493 [2024-11-20 10:03:56.352025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.493 [2024-11-20 10:03:56.352031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.493 [2024-11-20 10:03:56.352036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.493 [2024-11-20 10:03:56.352041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.493 [2024-11-20 10:03:56.363862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.493 [2024-11-20 10:03:56.364445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.493 [2024-11-20 10:03:56.364477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.493 [2024-11-20 10:03:56.364486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.493 [2024-11-20 10:03:56.364651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.493 [2024-11-20 10:03:56.364804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.493 [2024-11-20 10:03:56.364811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.493 [2024-11-20 10:03:56.364817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.493 [2024-11-20 10:03:56.364824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.493 [2024-11-20 10:03:56.376528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.493 [2024-11-20 10:03:56.376995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.493 [2024-11-20 10:03:56.377027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.493 [2024-11-20 10:03:56.377036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.493 [2024-11-20 10:03:56.377206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.493 [2024-11-20 10:03:56.377359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.493 [2024-11-20 10:03:56.377367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.493 [2024-11-20 10:03:56.377373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.493 [2024-11-20 10:03:56.377379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.493 [2024-11-20 10:03:56.389216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.493 [2024-11-20 10:03:56.389693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.493 [2024-11-20 10:03:56.389724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.493 [2024-11-20 10:03:56.389733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.493 [2024-11-20 10:03:56.389897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.493 [2024-11-20 10:03:56.390050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.493 [2024-11-20 10:03:56.390057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.493 [2024-11-20 10:03:56.390063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.493 [2024-11-20 10:03:56.390069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.493 [2024-11-20 10:03:56.401898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.493 [2024-11-20 10:03:56.402304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.493 [2024-11-20 10:03:56.402335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.493 [2024-11-20 10:03:56.402344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.493 [2024-11-20 10:03:56.402511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.493 [2024-11-20 10:03:56.402663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.493 [2024-11-20 10:03:56.402671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.755 [2024-11-20 10:03:56.402678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.755 [2024-11-20 10:03:56.402686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.755 [2024-11-20 10:03:56.414521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.755 [2024-11-20 10:03:56.415087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-11-20 10:03:56.415119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.755 [2024-11-20 10:03:56.415128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.755 [2024-11-20 10:03:56.415300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.755 [2024-11-20 10:03:56.415453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.755 [2024-11-20 10:03:56.415460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.755 [2024-11-20 10:03:56.415466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.755 [2024-11-20 10:03:56.415473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.755 [2024-11-20 10:03:56.427153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.755 [2024-11-20 10:03:56.427670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-11-20 10:03:56.427690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.755 [2024-11-20 10:03:56.427697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.755 [2024-11-20 10:03:56.427846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.755 [2024-11-20 10:03:56.427995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.755 [2024-11-20 10:03:56.428002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.755 [2024-11-20 10:03:56.428007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.755 [2024-11-20 10:03:56.428012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.755 [2024-11-20 10:03:56.439838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.755 [2024-11-20 10:03:56.440434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-11-20 10:03:56.440467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.755 [2024-11-20 10:03:56.440475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.755 [2024-11-20 10:03:56.440640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.755 [2024-11-20 10:03:56.440792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.755 [2024-11-20 10:03:56.440799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.755 [2024-11-20 10:03:56.440806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.755 [2024-11-20 10:03:56.440812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.755 [2024-11-20 10:03:56.452511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.755 [2024-11-20 10:03:56.453133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-11-20 10:03:56.453170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.755 [2024-11-20 10:03:56.453179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.755 [2024-11-20 10:03:56.453344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.755 [2024-11-20 10:03:56.453496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.755 [2024-11-20 10:03:56.453503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.755 [2024-11-20 10:03:56.453509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.755 [2024-11-20 10:03:56.453515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.756 [2024-11-20 10:03:56.465203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.756 [2024-11-20 10:03:56.465818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-11-20 10:03:56.465850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.756 [2024-11-20 10:03:56.465859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.756 [2024-11-20 10:03:56.466027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.756 [2024-11-20 10:03:56.466185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.756 [2024-11-20 10:03:56.466193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.756 [2024-11-20 10:03:56.466199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.756 [2024-11-20 10:03:56.466205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.756 [2024-11-20 10:03:56.477770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.756 [2024-11-20 10:03:56.478402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-11-20 10:03:56.478435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.756 [2024-11-20 10:03:56.478445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.756 [2024-11-20 10:03:56.478609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.756 [2024-11-20 10:03:56.478761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.756 [2024-11-20 10:03:56.478769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.756 [2024-11-20 10:03:56.478774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.756 [2024-11-20 10:03:56.478781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.756 [2024-11-20 10:03:56.490471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.756 [2024-11-20 10:03:56.490961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-11-20 10:03:56.490993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.756 [2024-11-20 10:03:56.491002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.756 [2024-11-20 10:03:56.491174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.756 [2024-11-20 10:03:56.491327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.756 [2024-11-20 10:03:56.491334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.756 [2024-11-20 10:03:56.491340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.756 [2024-11-20 10:03:56.491347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.756 [2024-11-20 10:03:56.503174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.756 [2024-11-20 10:03:56.503736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-11-20 10:03:56.503768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.756 [2024-11-20 10:03:56.503777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.756 [2024-11-20 10:03:56.503941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.756 [2024-11-20 10:03:56.504094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.756 [2024-11-20 10:03:56.504101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.756 [2024-11-20 10:03:56.504112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.756 [2024-11-20 10:03:56.504118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.756 [2024-11-20 10:03:56.515809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.756 [2024-11-20 10:03:56.516295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-11-20 10:03:56.516311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.756 [2024-11-20 10:03:56.516317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.756 [2024-11-20 10:03:56.516466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.756 [2024-11-20 10:03:56.516615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.756 [2024-11-20 10:03:56.516622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.756 [2024-11-20 10:03:56.516627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.756 [2024-11-20 10:03:56.516632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.756 [2024-11-20 10:03:56.528455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.756 [2024-11-20 10:03:56.528886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-11-20 10:03:56.528900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.756 [2024-11-20 10:03:56.528906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.756 [2024-11-20 10:03:56.529054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.756 [2024-11-20 10:03:56.529208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.756 [2024-11-20 10:03:56.529216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.756 [2024-11-20 10:03:56.529221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.756 [2024-11-20 10:03:56.529226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.756 [2024-11-20 10:03:56.541047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.756 [2024-11-20 10:03:56.541535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-11-20 10:03:56.541548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.756 [2024-11-20 10:03:56.541554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.756 [2024-11-20 10:03:56.541703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.756 [2024-11-20 10:03:56.541853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.756 [2024-11-20 10:03:56.541859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.756 [2024-11-20 10:03:56.541865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.756 [2024-11-20 10:03:56.541870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.756 [2024-11-20 10:03:56.553702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.756 [2024-11-20 10:03:56.554167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-11-20 10:03:56.554181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.756 [2024-11-20 10:03:56.554187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.756 [2024-11-20 10:03:56.554335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.756 [2024-11-20 10:03:56.554485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.756 [2024-11-20 10:03:56.554491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.756 [2024-11-20 10:03:56.554496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.756 [2024-11-20 10:03:56.554501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:25.756 4694.67 IOPS, 18.34 MiB/s [2024-11-20T09:03:56.672Z] [2024-11-20 10:03:56.567460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:25.756 [2024-11-20 10:03:56.567914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.756 [2024-11-20 10:03:56.567928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:25.756 [2024-11-20 10:03:56.567934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:25.756 [2024-11-20 10:03:56.568082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:25.756 [2024-11-20 10:03:56.568235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:25.756 [2024-11-20 10:03:56.568242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:25.756 [2024-11-20 10:03:56.568248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:25.756 [2024-11-20 10:03:56.568254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:25.756 [2024-11-20 10:03:56.580109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:25.756 [2024-11-20 10:03:56.580667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.756 [2024-11-20 10:03:56.580699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420
00:30:25.756 [2024-11-20 10:03:56.580708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set
00:30:25.756 [2024-11-20 10:03:56.580872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor
00:30:25.756 [2024-11-20 10:03:56.581024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:25.756 [2024-11-20 10:03:56.581031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:25.757 [2024-11-20 10:03:56.581038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:25.757 [2024-11-20 10:03:56.581045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
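The bdevperf statistics record above (4694.67 IOPS, 18.34 MiB/s) is internally consistent with a 4 KiB I/O size: 4694.67 * 4096 B is about 19.23 MB/s, which is 18.34 MiB/s. The 4 KiB figure is inferred from the two numbers, not stated anywhere in this log. A quick arithmetic check:

    /* iops_check.c - verify the IOPS/throughput pair from the log under the
     * assumption of 4 KiB I/Os; prints "4694.67 IOPS * 4 KiB = 18.34 MiB/s". */
    #include <stdio.h>

    int main(void)
    {
        double iops     = 4694.67;   /* from the log */
        double io_bytes = 4096.0;    /* assumed I/O size */
        double mib_s    = iops * io_bytes / (1024.0 * 1024.0);

        printf("%.2f IOPS * 4 KiB = %.2f MiB/s\n", iops, mib_s);
        return 0;
    }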
00:30:25.757 [2024-11-20 10:03:56.592728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.757 [2024-11-20 10:03:56.593380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-11-20 10:03:56.593416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.757 [2024-11-20 10:03:56.593425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.757 [2024-11-20 10:03:56.593589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.757 [2024-11-20 10:03:56.593742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.757 [2024-11-20 10:03:56.593749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.757 [2024-11-20 10:03:56.593755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.757 [2024-11-20 10:03:56.593761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:25.757 [2024-11-20 10:03:56.605318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:25.757 [2024-11-20 10:03:56.605936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-11-20 10:03:56.605968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:25.757 [2024-11-20 10:03:56.605977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:25.757 [2024-11-20 10:03:56.606141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:25.757 [2024-11-20 10:03:56.606300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:25.757 [2024-11-20 10:03:56.606308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:25.757 [2024-11-20 10:03:56.606314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:25.757 [2024-11-20 10:03:56.606320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:26.021 [2024-11-20 10:03:56.921531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:26.021 [2024-11-20 10:03:56.921987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.021 [2024-11-20 10:03:56.922022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:26.021 [2024-11-20 10:03:56.922031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:26.021 [2024-11-20 10:03:56.922204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:26.021 [2024-11-20 10:03:56.922357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:26.021 [2024-11-20 10:03:56.922364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:26.021 [2024-11-20 10:03:56.922370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:26.021 [2024-11-20 10:03:56.922376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:26.283 [2024-11-20 10:03:56.934204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:26.283 [2024-11-20 10:03:56.934741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.283 [2024-11-20 10:03:56.934772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:26.283 [2024-11-20 10:03:56.934781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:26.283 [2024-11-20 10:03:56.934947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:26.283 [2024-11-20 10:03:56.935100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:26.283 [2024-11-20 10:03:56.935108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:26.283 [2024-11-20 10:03:56.935115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:26.283 [2024-11-20 10:03:56.935121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
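Every block in the stretch above is the same failure: errno = 111 is ECONNREFUSED on Linux, meaning bdev_nvme's reconnect poll (spdk_nvme_ctrlr_reconnect_poll_async) is dialing 10.0.0.2:4420 before the target side of the test has a listener there, so each attempt ends in "Resetting controller failed." until the listener comes up further down. A minimal way to watch for that listener by hand, using bash's /dev/tcp on the address taken from the log (this probe loop is an illustration, not part of the test harness):

  # retry until 10.0.0.2:4420 stops refusing connections (errno 111);
  # the subshell's exec attempts a TCP connect via bash's /dev/tcp
  until bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
      printf 'connect() refused; NVMe/TCP listener not up yet\n'
      sleep 0.1
  done
  printf 'listener on 10.0.0.2:4420 is accepting connections\n'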
00:30:26.283 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:26.283 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:26.283 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:26.283 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:26.283 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:26.283 [2024-11-20 10:03:56.946818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:26.283 [2024-11-20 10:03:56.947301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.283 [2024-11-20 10:03:56.947334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:26.283 [2024-11-20 10:03:56.947343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:26.283 [2024-11-20 10:03:56.947507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:26.283 [2024-11-20 10:03:56.947659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:26.283 [2024-11-20 10:03:56.947667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:26.283 [2024-11-20 10:03:56.947673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:26.283 [2024-11-20 10:03:56.947679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:26.283 [2024-11-20 10:03:56.959389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:26.283 [2024-11-20 10:03:56.959856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.283 [2024-11-20 10:03:56.959872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:26.283 [2024-11-20 10:03:56.959879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:26.283 [2024-11-20 10:03:56.960028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:26.283 [2024-11-20 10:03:56.960183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:26.283 [2024-11-20 10:03:56.960191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:26.283 [2024-11-20 10:03:56.960197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:26.283 [2024-11-20 10:03:56.960202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
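The "(( i == 0 ))" / "return 0" pair traced above looks like the exit check of the harness's countdown wait in autotest_common.sh: a loop decrements i while polling for the target app, i reaching 0 would mean the wait timed out, and returning 0 here means nvmf_tgt came up, which is why timing_exit start_nvmf_tgt follows. The shape of that idiom, reduced to a standalone sketch (the RPC probe command and counter size are assumptions, not the harness's exact code):

  # count down while polling the target; i hitting 0 means we gave up
  for (( i = 20; i != 0; i-- )); do
      scripts/rpc.py rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done
  (( i == 0 )) && { echo 'target never answered RPC' >&2; exit 1; }
  echo 'target is up'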
00:30:26.283 [2024-11-20 10:03:56.972030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:26.283 [2024-11-20 10:03:56.972569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.284 [2024-11-20 10:03:56.972601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:26.284 [2024-11-20 10:03:56.972610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:26.284 [2024-11-20 10:03:56.972774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:26.284 [2024-11-20 10:03:56.972927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:26.284 [2024-11-20 10:03:56.972934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:26.284 [2024-11-20 10:03:56.972941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:26.284 [2024-11-20 10:03:56.972947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:26.284 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:26.284 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:26.284 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.284 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:26.284 [2024-11-20 10:03:56.984655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:26.284 [2024-11-20 10:03:56.985169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.284 [2024-11-20 10:03:56.985185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:26.284 [2024-11-20 10:03:56.985191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:26.284 [2024-11-20 10:03:56.985340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:26.284 [2024-11-20 10:03:56.985489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:26.284 [2024-11-20 10:03:56.985496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:26.284 [2024-11-20 10:03:56.985501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:26.284 [2024-11-20 10:03:56.985507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
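Note the trap armed in the trace above: once the target is running, nvmf/common.sh registers 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' on SIGINT, SIGTERM and EXIT, so shared-memory diagnostics get collected and the target torn down even if the test dies mid-run; the '|| :' keeps a failing collector from aborting the trap body. A self-contained sketch of the same pattern (the two functions are stand-ins for process_shm and nvmftestfini, not their real implementations):

  collect_shm() { ipcs -m; }                     # stand-in diagnostic dump
  teardown() { echo 'tearing down nvmf target'; }  # stand-in for nvmftestfini
  trap 'collect_shm || :; teardown' SIGINT SIGTERM EXIT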
00:30:26.284 [2024-11-20 10:03:56.986738] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:26.284 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.284 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:26.284 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.284 10:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:26.284 [2024-11-20 10:03:56.997340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:26.284 [2024-11-20 10:03:56.997951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.284 [2024-11-20 10:03:56.997982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:26.284 [2024-11-20 10:03:56.997991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:26.284 [2024-11-20 10:03:56.998155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:26.284 [2024-11-20 10:03:56.998314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:26.284 [2024-11-20 10:03:56.998322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:26.284 [2024-11-20 10:03:56.998328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:26.284 [2024-11-20 10:03:56.998334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:26.284 [2024-11-20 10:03:57.010028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:26.284 [2024-11-20 10:03:57.010475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.284 [2024-11-20 10:03:57.010507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:26.284 [2024-11-20 10:03:57.010516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:26.284 [2024-11-20 10:03:57.010681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:26.284 [2024-11-20 10:03:57.010833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:26.284 [2024-11-20 10:03:57.010840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:26.284 [2024-11-20 10:03:57.010846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:26.284 [2024-11-20 10:03:57.010852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:26.284 Malloc0 00:30:26.284 10:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.284 10:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:26.284 10:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.284 10:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:26.284 [2024-11-20 10:03:57.022608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:26.284 [2024-11-20 10:03:57.023255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.284 [2024-11-20 10:03:57.023288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:26.284 [2024-11-20 10:03:57.023297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:26.284 [2024-11-20 10:03:57.023463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:26.284 [2024-11-20 10:03:57.023620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:26.284 [2024-11-20 10:03:57.023628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:26.284 [2024-11-20 10:03:57.023634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:26.284 [2024-11-20 10:03:57.023640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:26.284 10:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.284 10:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:26.284 10:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.284 10:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:26.284 [2024-11-20 10:03:57.035207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:26.284 [2024-11-20 10:03:57.035771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.284 [2024-11-20 10:03:57.035803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:26.284 [2024-11-20 10:03:57.035812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:26.284 [2024-11-20 10:03:57.035976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:26.284 [2024-11-20 10:03:57.036129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:26.284 [2024-11-20 10:03:57.036138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:26.284 [2024-11-20 10:03:57.036143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:30:26.284 [2024-11-20 10:03:57.036149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:26.284 10:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.284 10:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:26.284 10:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.284 10:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:26.284 [2024-11-20 10:03:57.047844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:26.284 [2024-11-20 10:03:57.048470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.284 [2024-11-20 10:03:57.048502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d000 with addr=10.0.0.2, port=4420 00:30:26.284 [2024-11-20 10:03:57.048511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d000 is same with the state(6) to be set 00:30:26.284 [2024-11-20 10:03:57.048664] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.284 [2024-11-20 10:03:57.048676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d000 (9): Bad file descriptor 00:30:26.284 [2024-11-20 10:03:57.048828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:26.284 [2024-11-20 10:03:57.048835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:26.284 [2024-11-20 10:03:57.048842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:26.284 [2024-11-20 10:03:57.048848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:26.284 10:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.284 10:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1557161 00:30:26.284 [2024-11-20 10:03:57.060429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:26.284 [2024-11-20 10:03:57.087761] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
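Interleaved with the reconnect noise above is the target-side bring-up that finally ends it: the TCP transport is created, a Malloc0 bdev is built and attached as a namespace of subsystem nqn.2016-06.io.spdk:cnode1, and the 10.0.0.2:4420 listener is added — the moment "NVMe/TCP Target Listening" is logged, the pending reset completes with "Resetting controller successful." Issued by hand against a running nvmf_tgt, the same sequence from the rpc_cmd traces is (rpc.py's default RPC socket is assumed):

  RPC=scripts/rpc.py   # in the SPDK tree
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420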
00:30:27.800 4904.86 IOPS, 19.16 MiB/s
[2024-11-20T09:03:59.660Z] 5928.25 IOPS, 23.16 MiB/s
[2024-11-20T09:04:00.600Z] 6690.44 IOPS, 26.13 MiB/s
[2024-11-20T09:04:01.983Z] 7306.10 IOPS, 28.54 MiB/s
[2024-11-20T09:04:02.924Z] 7826.18 IOPS, 30.57 MiB/s
[2024-11-20T09:04:03.865Z] 8240.25 IOPS, 32.19 MiB/s
[2024-11-20T09:04:04.806Z] 8612.15 IOPS, 33.64 MiB/s
[2024-11-20T09:04:05.747Z] 8925.50 IOPS, 34.87 MiB/s
00:30:34.831 Latency(us)
00:30:34.831 [2024-11-20T09:04:05.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:34.831 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:34.831 Verification LBA range: start 0x0 length 0x4000
00:30:34.831 Nvme1n1 : 15.00 9198.58 35.93 13515.90 0.00 5616.18 570.03 14745.60
00:30:34.831 [2024-11-20T09:04:05.747Z] ===================================================================================================================
00:30:34.831 [2024-11-20T09:04:05.747Z] Total : 9198.58 35.93 13515.90 0.00 5616.18 570.03 14745.60
00:30:34.831 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:30:34.831 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:34.831 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:34.831 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:34.831 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:34.831 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:30:34.831 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:30:34.831 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:34.831 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:30:34.831 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:34.831 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:30:34.831 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:34.831 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1558456 ']'
00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1558456
00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1558456 ']'
00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1558456
00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1558456
00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf
-- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1558456' 00:30:35.092 killing process with pid 1558456 00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1558456 00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1558456 00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.092 10:04:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:37.638 00:30:37.638 real 0m28.266s 00:30:37.638 user 1m3.237s 00:30:37.638 sys 0m7.771s 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:37.638 ************************************ 00:30:37.638 END TEST nvmf_bdevperf 00:30:37.638 ************************************ 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.638 ************************************ 00:30:37.638 START TEST nvmf_target_disconnect 00:30:37.638 ************************************ 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:37.638 * Looking for test storage... 
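Before the next test starts: the Latency(us) table that closed out nvmf_bdevperf above reports 15 s of verify I/O at queue depth 128 with 4096-byte blocks, averaging 9198.58 IOPS. The MiB/s column is just IOPS times the IO size, which is easy to sanity-check (this one-liner reproduces the reported 35.93):

  # throughput = IOPS * IO size; 4096-byte blocks, 1 MiB = 1024*1024 bytes
  awk 'BEGIN { printf "%.2f MiB/s\n", 9198.58 * 4096 / (1024 * 1024) }'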
00:30:37.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:37.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.638 --rc genhtml_branch_coverage=1 00:30:37.638 --rc genhtml_function_coverage=1 00:30:37.638 --rc genhtml_legend=1 00:30:37.638 --rc geninfo_all_blocks=1 00:30:37.638 --rc geninfo_unexecuted_blocks=1 00:30:37.638 00:30:37.638 ' 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:37.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.638 --rc genhtml_branch_coverage=1 00:30:37.638 --rc genhtml_function_coverage=1 00:30:37.638 --rc genhtml_legend=1 00:30:37.638 --rc geninfo_all_blocks=1 00:30:37.638 --rc geninfo_unexecuted_blocks=1 00:30:37.638 00:30:37.638 ' 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:37.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.638 --rc genhtml_branch_coverage=1 00:30:37.638 --rc genhtml_function_coverage=1 00:30:37.638 --rc genhtml_legend=1 00:30:37.638 --rc geninfo_all_blocks=1 00:30:37.638 --rc geninfo_unexecuted_blocks=1 00:30:37.638 00:30:37.638 ' 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:37.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.638 --rc genhtml_branch_coverage=1 00:30:37.638 --rc genhtml_function_coverage=1 00:30:37.638 --rc genhtml_legend=1 00:30:37.638 --rc geninfo_all_blocks=1 00:30:37.638 --rc geninfo_unexecuted_blocks=1 00:30:37.638 00:30:37.638 ' 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:37.638 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:37.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:30:37.639 10:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:45.789 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:45.789 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:45.789 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:45.789 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
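[annotation] The discovery pass above resolves each supported NIC (here two Intel e810 ports, device id 0x159b) to its kernel net device through sysfs, which is how the trace arrives at "Found net devices under 0000:4b:00.0: cvl_0_0" and "... 0000:4b:00.1: cvl_0_1". A condensed sketch of that loop under the assumptions of this run (the two PCI addresses are the ones found here; variable names follow the nvmf/common.sh expansions visible in the trace):

    # e810 NICs found on this host (from the pci_bus_cache lookups above)
    pci_devs=(0000:4b:00.0 0000:4b:00.1)
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # each PCI device exposes its net interface(s) under sysfs
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

With two interfaces found, nvmf_tcp_init (below) assigns one as the target side and one as the initiator side before moving the target interface into its own network namespace.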
00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.789 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:45.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:30:45.790 00:30:45.790 --- 10.0.0.2 ping statistics --- 00:30:45.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.790 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:45.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:30:45.790 00:30:45.790 --- 10.0.0.1 ping statistics --- 00:30:45.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.790 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:45.790 ************************************ 00:30:45.790 START TEST nvmf_target_disconnect_tc1 00:30:45.790 ************************************ 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:45.790 10:04:15 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:45.790 10:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:45.790 [2024-11-20 10:04:15.995886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.790 [2024-11-20 10:04:15.995986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73ad0 with addr=10.0.0.2, port=4420 00:30:45.790 [2024-11-20 10:04:15.996020] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:45.790 [2024-11-20 10:04:15.996031] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:45.790 [2024-11-20 10:04:15.996039] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:45.790 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:45.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:45.790 Initializing NVMe Controllers 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:45.790 00:30:45.790 real 0m0.145s 00:30:45.790 user 0m0.070s 00:30:45.790 sys 0m0.076s 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:45.790 ************************************ 00:30:45.790 END TEST nvmf_target_disconnect_tc1 00:30:45.790 ************************************ 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:45.790 ************************************ 00:30:45.790 START TEST nvmf_target_disconnect_tc2 00:30:45.790 ************************************ 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1564508 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1564508 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1564508 ']' 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.790 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:45.790 [2024-11-20 10:04:16.173145] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:30:45.790 [2024-11-20 10:04:16.173223] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.790 [2024-11-20 10:04:16.275526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:45.790 [2024-11-20 10:04:16.327910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.790 [2024-11-20 10:04:16.327965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:45.790 [2024-11-20 10:04:16.327974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.790 [2024-11-20 10:04:16.327981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.791 [2024-11-20 10:04:16.327987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.791 [2024-11-20 10:04:16.330037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:45.791 [2024-11-20 10:04:16.330185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:45.791 [2024-11-20 10:04:16.330329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:45.791 [2024-11-20 10:04:16.330329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:46.364 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:46.364 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:46.364 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:46.364 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:46.364 10:04:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:46.364 Malloc0 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:46.364 [2024-11-20 10:04:17.067691] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:46.364 10:04:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:46.364 [2024-11-20 10:04:17.108103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1564691 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:46.364 10:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:48.282 10:04:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1564508 00:30:48.282 10:04:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error 
(sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Write completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Write completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Write completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Write completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Write completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 [2024-11-20 10:04:19.145460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with 
error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Write completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Write completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Write completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Write completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Read completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.282 Write completed with error (sct=0, sc=8) 00:30:48.282 starting I/O failed 00:30:48.283 Read completed with error (sct=0, sc=8) 00:30:48.283 starting I/O failed 00:30:48.283 Read completed with error (sct=0, sc=8) 00:30:48.283 starting I/O failed 00:30:48.283 Write completed with error (sct=0, sc=8) 00:30:48.283 starting I/O failed 00:30:48.283 Write completed with error (sct=0, sc=8) 00:30:48.283 starting I/O failed 00:30:48.283 [2024-11-20 10:04:19.145822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:48.283 [2024-11-20 10:04:19.146378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.283 [2024-11-20 10:04:19.146435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.283 qpair failed and we were unable to recover it. 00:30:48.283 [2024-11-20 10:04:19.146725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.283 [2024-11-20 10:04:19.146741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.283 qpair failed and we were unable to recover it. 00:30:48.283 [2024-11-20 10:04:19.147092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.283 [2024-11-20 10:04:19.147106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.283 qpair failed and we were unable to recover it. 00:30:48.283 [2024-11-20 10:04:19.147450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.283 [2024-11-20 10:04:19.147503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.283 qpair failed and we were unable to recover it. 00:30:48.283 [2024-11-20 10:04:19.148711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.283 [2024-11-20 10:04:19.148742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.283 qpair failed and we were unable to recover it. 
00:30:48.283 [2024-11-20 10:04:19.149078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.283 [2024-11-20 10:04:19.149093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.283 qpair failed and we were unable to recover it. 00:30:48.283 [2024-11-20 10:04:19.149419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.283 [2024-11-20 10:04:19.149470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.283 qpair failed and we were unable to recover it. 00:30:48.283 [2024-11-20 10:04:19.149815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.283 [2024-11-20 10:04:19.149831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.283 qpair failed and we were unable to recover it. 00:30:48.283 [2024-11-20 10:04:19.150174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.283 [2024-11-20 10:04:19.150189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.283 qpair failed and we were unable to recover it. 00:30:48.283 [2024-11-20 10:04:19.150413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.283 [2024-11-20 10:04:19.150427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.283 qpair failed and we were unable to recover it. 00:30:48.283 [2024-11-20 10:04:19.150763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.283 [2024-11-20 10:04:19.150776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.283 qpair failed and we were unable to recover it. 00:30:48.283 [2024-11-20 10:04:19.151104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.283 [2024-11-20 10:04:19.151118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.283 qpair failed and we were unable to recover it. 00:30:48.283 [2024-11-20 10:04:19.151410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.283 [2024-11-20 10:04:19.151424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.283 qpair failed and we were unable to recover it. 00:30:48.283 [2024-11-20 10:04:19.151548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.283 [2024-11-20 10:04:19.151562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.283 qpair failed and we were unable to recover it. 00:30:48.283 [2024-11-20 10:04:19.151862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.283 [2024-11-20 10:04:19.151875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.283 qpair failed and we were unable to recover it. 
00:30:48.283 [2024-11-20 10:04:19.152175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.283 [2024-11-20 10:04:19.152188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.283 qpair failed and we were unable to recover it.
00:30:48.283 [2024-11-20 10:04:19.153172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.283 [2024-11-20 10:04:19.153203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.283 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f3890000b90 at 10.0.0.2 port 4420, qpair unrecoverable) repeats for every reconnect attempt from 10:04:19.153540 through 10:04:19.225491 ...]
00:30:48.560 [2024-11-20 10:04:19.225835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.560 [2024-11-20 10:04:19.225854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.560 qpair failed and we were unable to recover it.
00:30:48.560 [2024-11-20 10:04:19.226201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.560 [2024-11-20 10:04:19.226219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.560 qpair failed and we were unable to recover it. 00:30:48.560 [2024-11-20 10:04:19.226553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.560 [2024-11-20 10:04:19.226570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.560 qpair failed and we were unable to recover it. 00:30:48.560 [2024-11-20 10:04:19.226878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.560 [2024-11-20 10:04:19.226897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.560 qpair failed and we were unable to recover it. 00:30:48.560 [2024-11-20 10:04:19.227136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.560 [2024-11-20 10:04:19.227153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.560 qpair failed and we were unable to recover it. 00:30:48.560 [2024-11-20 10:04:19.227522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.560 [2024-11-20 10:04:19.227539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.560 qpair failed and we were unable to recover it. 00:30:48.560 [2024-11-20 10:04:19.227961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.227979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.228280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.228297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.228556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.228573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.228924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.228941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.229262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.229279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 
00:30:48.561 [2024-11-20 10:04:19.229602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.229619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.229940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.229961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.230304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.230322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.230676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.230693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.231006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.231023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.231347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.231365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.231716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.231732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.232049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.232066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.232489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.232506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.232715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.232731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 
00:30:48.561 [2024-11-20 10:04:19.233070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.233087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.233406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.233424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.233754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.233772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.234090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.234107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.234449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.234467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.234780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.234798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.235140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.235164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.235492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.235510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.235711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.235731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.236054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.236071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 
00:30:48.561 [2024-11-20 10:04:19.236431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.236449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.236785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.236804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.237143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.237169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.237494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.237510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.237744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.237760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.238047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.238065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.238304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.238321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.238612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.238629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.238963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.238982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.239236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.239254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 
00:30:48.561 [2024-11-20 10:04:19.239575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.239593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.239886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.239902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.240231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.561 [2024-11-20 10:04:19.240248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.561 qpair failed and we were unable to recover it. 00:30:48.561 [2024-11-20 10:04:19.240579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.240596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.240954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.240971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.241258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.241275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.241661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.241677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.242005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.242023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.242347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.242364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.242694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.242713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 
00:30:48.562 [2024-11-20 10:04:19.242898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.242919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.243183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.243205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.243450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.243467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.243795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.243813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.244137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.244155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.244432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.244448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.244670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.244686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.245025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.245043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.245258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.245277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.245642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.245659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 
00:30:48.562 [2024-11-20 10:04:19.246007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.246023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.246334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.246351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.246575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.246592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.246932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.246949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.247169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.247187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.247492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.247508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.247834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.247851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.248201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.248220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.248563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.248580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.248909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.248927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 
00:30:48.562 [2024-11-20 10:04:19.249183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.249202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.249534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.249552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.249770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.249786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.250132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.250149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.250516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.562 [2024-11-20 10:04:19.250534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.562 qpair failed and we were unable to recover it. 00:30:48.562 [2024-11-20 10:04:19.250881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.250899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.251186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.251203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.251580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.251598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.251911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.251928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.252142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.252166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 
00:30:48.563 [2024-11-20 10:04:19.252539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.252556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.252761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.252777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.253123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.253139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.253491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.253510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.253837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.253853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.254190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.254207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.254512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.254529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.254872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.254889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.255150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.255173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.255439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.255456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 
00:30:48.563 [2024-11-20 10:04:19.255784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.255803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.256148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.256186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.256619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.256636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.256980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.256998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.257250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.257267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.257623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.257639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.258059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.258077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.258454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.258472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.258811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.258829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.259170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.259187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 
00:30:48.563 [2024-11-20 10:04:19.259435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.259452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.259783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.259799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.260142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.260182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.260537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.260556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.260899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.260916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.261197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.261215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.261413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.261430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.261757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.261774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.262109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.262126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.262373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.262390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 
00:30:48.563 [2024-11-20 10:04:19.262748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.262764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.263133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.263151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.263490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.263506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.263823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.563 [2024-11-20 10:04:19.263840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.563 qpair failed and we were unable to recover it. 00:30:48.563 [2024-11-20 10:04:19.264188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.564 [2024-11-20 10:04:19.264206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.564 qpair failed and we were unable to recover it. 00:30:48.564 [2024-11-20 10:04:19.264568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.564 [2024-11-20 10:04:19.264586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.564 qpair failed and we were unable to recover it. 00:30:48.564 [2024-11-20 10:04:19.264971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.564 [2024-11-20 10:04:19.264987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.564 qpair failed and we were unable to recover it. 00:30:48.564 [2024-11-20 10:04:19.265303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.564 [2024-11-20 10:04:19.265320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.564 qpair failed and we were unable to recover it. 00:30:48.564 [2024-11-20 10:04:19.265672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.564 [2024-11-20 10:04:19.265689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.564 qpair failed and we were unable to recover it. 00:30:48.564 [2024-11-20 10:04:19.266038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.564 [2024-11-20 10:04:19.266057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.564 qpair failed and we were unable to recover it. 
00:30:48.564 [2024-11-20 10:04:19.266335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.564 [2024-11-20 10:04:19.266352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.564 qpair failed and we were unable to recover it. 00:30:48.564 [2024-11-20 10:04:19.266696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.564 [2024-11-20 10:04:19.266713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.564 qpair failed and we were unable to recover it. 00:30:48.564 [2024-11-20 10:04:19.267070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.564 [2024-11-20 10:04:19.267087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.564 qpair failed and we were unable to recover it. 00:30:48.564 [2024-11-20 10:04:19.267450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.564 [2024-11-20 10:04:19.267467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.564 qpair failed and we were unable to recover it. 00:30:48.564 [2024-11-20 10:04:19.267691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.564 [2024-11-20 10:04:19.267707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.564 qpair failed and we were unable to recover it. 00:30:48.564 [2024-11-20 10:04:19.268028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.564 [2024-11-20 10:04:19.268045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.564 qpair failed and we were unable to recover it. 00:30:48.564 [2024-11-20 10:04:19.268406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.564 [2024-11-20 10:04:19.268425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.564 qpair failed and we were unable to recover it. 00:30:48.564 [2024-11-20 10:04:19.268758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.564 [2024-11-20 10:04:19.268775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.564 qpair failed and we were unable to recover it. 00:30:48.564 [2024-11-20 10:04:19.268973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.564 [2024-11-20 10:04:19.268991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.564 qpair failed and we were unable to recover it. 00:30:48.564 [2024-11-20 10:04:19.269306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.564 [2024-11-20 10:04:19.269324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.564 qpair failed and we were unable to recover it. 
00:30:48.564 [2024-11-20 10:04:19.269705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.564 [2024-11-20 10:04:19.269723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.564 qpair failed and we were unable to recover it.
...
00:30:48.569 [2024-11-20 10:04:19.340327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.569 [2024-11-20 10:04:19.340345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.569 qpair failed and we were unable to recover it.
00:30:48.569 [2024-11-20 10:04:19.340692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.340710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.341066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.341084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.341347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.341364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.341709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.341727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.342068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.342087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.342414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.342433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.342774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.342792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.343127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.343144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.343469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.343487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.343817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.343834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 
00:30:48.569 [2024-11-20 10:04:19.344182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.344201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.344398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.344418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.344761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.344780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.345090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.345108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.345425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.345442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.345832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.345851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.346193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.346211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.346609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.346625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.346965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.346987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.347321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.347339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 
00:30:48.569 [2024-11-20 10:04:19.347717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.347735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.347906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.347923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.348232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.348251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.348588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.348606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.348943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.348960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.349284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.349301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.349649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.349666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.349995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.350014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.350355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.350374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.350691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.350709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 
00:30:48.569 [2024-11-20 10:04:19.351039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.351058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.351410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.351427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.351767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.351787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.352111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.352130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.352473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.352493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.352823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.352840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.353174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.353194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.353427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.569 [2024-11-20 10:04:19.353445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.569 qpair failed and we were unable to recover it. 00:30:48.569 [2024-11-20 10:04:19.353743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.353761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.354039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.354056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 
00:30:48.570 [2024-11-20 10:04:19.355240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.355282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.355666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.355685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.356021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.356042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.356281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.356300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.356640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.356657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.356987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.357004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.357235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.357252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.357604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.357622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.357955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.357973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.358314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.358333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 
00:30:48.570 [2024-11-20 10:04:19.358671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.358689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.359029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.359047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.359370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.359387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.359695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.359713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.360055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.360074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.360406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.360424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.360650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.360666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.360992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.361011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.361336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.361358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.361770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.361788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 
00:30:48.570 [2024-11-20 10:04:19.362131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.362148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.362525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.362543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.362885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.362903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.363232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.363250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.363609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.363628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.363852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.363869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.364225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.364243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.364471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.364489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.364826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.364845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.365188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.365206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 
00:30:48.570 [2024-11-20 10:04:19.365552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.365570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.365787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.365803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.366142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.366167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.366507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.366525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.366863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.366881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.367216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.367235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.367459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.367475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.367862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.367879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.368216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.368235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.368582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.368600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 
00:30:48.570 [2024-11-20 10:04:19.368931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.368950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.369275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.369293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.369634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.369651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.369991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.370007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.370253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.370270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.370631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.370650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.370970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.370986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.371327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.570 [2024-11-20 10:04:19.371344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.570 qpair failed and we were unable to recover it. 00:30:48.570 [2024-11-20 10:04:19.371686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.371705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.372052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.372069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 
00:30:48.571 [2024-11-20 10:04:19.372411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.372430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.372767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.372785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.373122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.373140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.373450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.373468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.373800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.373819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.374171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.374190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.374531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.374549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.374896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.374914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.375239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.375260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.375614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.375630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 
00:30:48.571 [2024-11-20 10:04:19.375969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.375987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.376305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.376323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.376629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.376647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.376976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.376992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.377344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.377363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.377677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.377694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.377913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.377929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.378280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.378298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.378611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.378629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.378979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.378996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 
00:30:48.571 [2024-11-20 10:04:19.379333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.379350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.379699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.379715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.380054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.380072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.380404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.380422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.380761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.380779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.381118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.381136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.381478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.381496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.381818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.381837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.382172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.382192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.382534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.382550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 
00:30:48.571 [2024-11-20 10:04:19.382890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.382907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.383254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.383272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.383605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.383621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.383958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.383976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.384317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.384336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.384689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.384706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.385041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.385057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.385401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.385418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.385745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.385764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.386080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.386098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 
00:30:48.571 [2024-11-20 10:04:19.386447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.386464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.386802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.386820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.387147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.387174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.387514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.387530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.387870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.387887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.388223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.388242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.388580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.388599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.388949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.388966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.389304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.389326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.389657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.389676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 
00:30:48.571 [2024-11-20 10:04:19.390015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.390033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.390382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.390400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.571 [2024-11-20 10:04:19.390737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.571 [2024-11-20 10:04:19.390755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.571 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.391080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.391097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.391444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.391462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.391807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.391825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.392170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.392188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.392525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.392543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.392874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.392893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.393216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.393234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 
00:30:48.572 [2024-11-20 10:04:19.393575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.393593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.393934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.393951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.394294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.394314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.394658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.394675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.395000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.395018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.395246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.395264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.395588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.395604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.395821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.395839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.396182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.396202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.396504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.396522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 
00:30:48.572 [2024-11-20 10:04:19.396746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.396763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.397006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.397025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.397384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.397401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.397734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.397753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.398092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.398112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.398457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.398476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.398752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.398769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.399097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.399115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.399464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.399483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.399802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.399820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 
00:30:48.572 [2024-11-20 10:04:19.400177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.400197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.400536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.400553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.400976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.400993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.401221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.401238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.401587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.401604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.401930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.401947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.402246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.402264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.402585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.402603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.403026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.403046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.403272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.403290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 
00:30:48.572 [2024-11-20 10:04:19.403585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.403602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.403942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.403960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.404309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.404327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.404671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.404690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.405033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.405051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.405276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.405292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.405634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.405653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.405943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.405960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.406241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.406259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.406632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.406650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 
00:30:48.572 [2024-11-20 10:04:19.406992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.407011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.407198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.407216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.407533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.407551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.407908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.407926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.408282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.408300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.408649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.408667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.409006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.572 [2024-11-20 10:04:19.409024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.572 qpair failed and we were unable to recover it. 00:30:48.572 [2024-11-20 10:04:19.409378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.409398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.409615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.409632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.409972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.409988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 
00:30:48.573 [2024-11-20 10:04:19.410339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.410357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.410671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.410689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.411015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.411032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.411388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.411406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.411595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.411614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.412023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.412042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.412351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.412370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.412706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.412724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.413076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.413095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.413406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.413425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 
00:30:48.573 [2024-11-20 10:04:19.413639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.413658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.414017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.414036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.414445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.414463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.414810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.414828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.415174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.415193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.415554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.415573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.415916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.415935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.416266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.416285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.416625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.416647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.416990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.417007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 
00:30:48.573 [2024-11-20 10:04:19.417251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.417269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.417643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.417661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.418006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.418025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.418346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.418364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.418695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.418712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.418905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.418922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.419284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.419303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.419527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.419545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.419887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.419906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.420298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.420317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 
00:30:48.573 [2024-11-20 10:04:19.420670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.420688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.421033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.421052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.421460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.421478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.421813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.421832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.422152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.422179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.422498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.422515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.422732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.422748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.423081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.423098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.423300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.423318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.423520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.423536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 
00:30:48.573 [2024-11-20 10:04:19.423908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.423924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.424141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.424173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.424346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.424363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.424694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.424710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.425048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.425067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.425290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.425310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.425638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.425656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.425924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.573 [2024-11-20 10:04:19.425942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.573 qpair failed and we were unable to recover it. 00:30:48.573 [2024-11-20 10:04:19.426285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.426302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.426635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.426651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 
00:30:48.574 [2024-11-20 10:04:19.426991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.427008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.427333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.427352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.427703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.427720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.428062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.428081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.428424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.428442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.428638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.428654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.428996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.429014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.429338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.429354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.429703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.429723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.429810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.429825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 
00:30:48.574 [2024-11-20 10:04:19.430117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.430133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.430504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.430521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.430870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.430889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.431221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.431238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.431580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.431597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.431944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.431961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.432197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.432214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.432533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.432551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.432898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.432917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.433233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.433250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 
00:30:48.574 [2024-11-20 10:04:19.433620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.433637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.433974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.433990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.434337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.434356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.434535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.434551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.434776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.434791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.435018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.435035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.435355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.435372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.435715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.435732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.436081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.436098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.436420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.436439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 
00:30:48.574 [2024-11-20 10:04:19.436748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.436766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.437109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.437127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.437463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.437480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.437812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.437831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.438176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.438193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.438560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.438578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.438760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.438777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.438999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.439015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.439274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.439292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.439645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.439662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 
00:30:48.574 [2024-11-20 10:04:19.439976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.439993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.440314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.440330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.440518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.440534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.440717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.440733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.440918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.440936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.441300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.441318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.441648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.441667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.441888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.441907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.442284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.442303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.442649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.442667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 
00:30:48.574 [2024-11-20 10:04:19.442892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.442908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.443280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.443297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.443640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.443658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.443981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.574 [2024-11-20 10:04:19.443998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.574 qpair failed and we were unable to recover it. 00:30:48.574 [2024-11-20 10:04:19.444335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.575 [2024-11-20 10:04:19.444353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.575 qpair failed and we were unable to recover it. 00:30:48.575 [2024-11-20 10:04:19.444687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.575 [2024-11-20 10:04:19.444703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.575 qpair failed and we were unable to recover it. 00:30:48.575 [2024-11-20 10:04:19.445048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.575 [2024-11-20 10:04:19.445066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.575 qpair failed and we were unable to recover it. 00:30:48.575 [2024-11-20 10:04:19.445392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.575 [2024-11-20 10:04:19.445411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.575 qpair failed and we were unable to recover it. 00:30:48.575 [2024-11-20 10:04:19.445744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.575 [2024-11-20 10:04:19.445763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.575 qpair failed and we were unable to recover it. 00:30:48.575 [2024-11-20 10:04:19.445978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.575 [2024-11-20 10:04:19.445997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.575 qpair failed and we were unable to recover it. 
00:30:48.853 [2024-11-20 10:04:19.510439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-11-20 10:04:19.510456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.853 qpair failed and we were unable to recover it. 00:30:48.853 [2024-11-20 10:04:19.510789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-11-20 10:04:19.510807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.853 qpair failed and we were unable to recover it. 00:30:48.853 [2024-11-20 10:04:19.511166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-11-20 10:04:19.511184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.853 qpair failed and we were unable to recover it. 00:30:48.853 [2024-11-20 10:04:19.511524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-11-20 10:04:19.511542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.853 qpair failed and we were unable to recover it. 00:30:48.853 [2024-11-20 10:04:19.511843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-11-20 10:04:19.511862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.853 qpair failed and we were unable to recover it. 00:30:48.853 [2024-11-20 10:04:19.512197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-11-20 10:04:19.512215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.853 qpair failed and we were unable to recover it. 00:30:48.853 [2024-11-20 10:04:19.512578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-11-20 10:04:19.512595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.853 qpair failed and we were unable to recover it. 00:30:48.853 [2024-11-20 10:04:19.512938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-11-20 10:04:19.512956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.853 qpair failed and we were unable to recover it. 00:30:48.853 [2024-11-20 10:04:19.513293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-11-20 10:04:19.513311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.853 qpair failed and we were unable to recover it. 00:30:48.853 [2024-11-20 10:04:19.513653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-11-20 10:04:19.513670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.853 qpair failed and we were unable to recover it. 
00:30:48.853 [2024-11-20 10:04:19.514017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-11-20 10:04:19.514034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.853 qpair failed and we were unable to recover it. 00:30:48.853 [2024-11-20 10:04:19.514381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.853 [2024-11-20 10:04:19.514401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.514719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.514736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.514933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.514951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.515262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.515281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.515660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.515678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.515871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.515889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.516220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.516237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.516571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.516588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.516928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.516945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 
00:30:48.854 [2024-11-20 10:04:19.517284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.517302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.517613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.517632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.517974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.517991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.518332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.518350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.518685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.518703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.519038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.519060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.519394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.519412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.519747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.519765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.520101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.520119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.520452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.520472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 
00:30:48.854 [2024-11-20 10:04:19.520795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.520812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.521148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.521173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.521515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.521533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.521945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.521962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.522285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.522303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.522643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.854 [2024-11-20 10:04:19.522661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.854 qpair failed and we were unable to recover it. 00:30:48.854 [2024-11-20 10:04:19.522993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.523012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.523328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.523346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.523706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.523724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.524076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.524096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 
00:30:48.855 [2024-11-20 10:04:19.524431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.524450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.524644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.524661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.524991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.525009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.525353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.525372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.525706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.525723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.526056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.526074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.526411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.526429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.526767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.526785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.527124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.527141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.527471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.527490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 
00:30:48.855 [2024-11-20 10:04:19.527803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.527821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.528156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.528180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.528501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.528518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.528748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.528764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.528923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.528939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.529299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.529316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.529657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.529675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.530015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.530033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.530410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.530428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.530760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.530779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 
00:30:48.855 [2024-11-20 10:04:19.531112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.531129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.531465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.531484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.531859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.531877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.532212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.532230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.532587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.532605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.532944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.532970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.533183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.855 [2024-11-20 10:04:19.533200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.855 qpair failed and we were unable to recover it. 00:30:48.855 [2024-11-20 10:04:19.533546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.533563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.533894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.533912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.534253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.534272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 
00:30:48.856 [2024-11-20 10:04:19.534602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.534618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.535012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.535030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.535336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.535354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.535711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.535728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.536076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.536094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.536441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.536458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.536786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.536804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.537164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.537181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.537515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.537533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.537869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.537886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 
00:30:48.856 [2024-11-20 10:04:19.538270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.538289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.538613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.538631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.538974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.538991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.539366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.539386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.539736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.539755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.540093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.540110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.540460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.540479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.540822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.540839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.541175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.541194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.541558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.541575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 
00:30:48.856 [2024-11-20 10:04:19.541964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.541982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.542305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.542323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.542658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.542677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.543014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.543032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.543381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.543400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.543742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.543759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.544101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.544121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.544456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.856 [2024-11-20 10:04:19.544473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.856 qpair failed and we were unable to recover it. 00:30:48.856 [2024-11-20 10:04:19.544799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.544817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.545151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.545174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 
00:30:48.857 [2024-11-20 10:04:19.545515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.545533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.545873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.545890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.546243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.546260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.546602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.546619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.546958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.546977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.547317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.547338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.547555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.547573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.547906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.547923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.548258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.548275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.548654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.548670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 
00:30:48.857 [2024-11-20 10:04:19.549003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.549022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.549337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.549355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.549689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.549707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.550036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.550054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.550383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.550400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.550726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.550744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.551081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.551099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.551434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.551453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.551797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.551815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.552155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.552180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 
00:30:48.857 [2024-11-20 10:04:19.552515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.552532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.552873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.552891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.553239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.857 [2024-11-20 10:04:19.553257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.857 qpair failed and we were unable to recover it. 00:30:48.857 [2024-11-20 10:04:19.553576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.553595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.553921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.553939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.554263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.554282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.554621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.554641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.555023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.555041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.555415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.555434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.555772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.555789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 
00:30:48.858 [2024-11-20 10:04:19.556118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.556137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.556473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.556492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.556834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.556853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.557183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.557201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.557574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.557592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.557924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.557941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.558281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.558300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.558641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.558659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.558977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.558995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.559368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.559387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 
00:30:48.858 [2024-11-20 10:04:19.559723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.559741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.560059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.560077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.560398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.560416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.560747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.560765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.561102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.561119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.561439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.561461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.561810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.561828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.562169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.562187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.562525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.562543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.562873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.562891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 
00:30:48.858 [2024-11-20 10:04:19.563239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.563256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.563606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.563625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.563961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.563980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.858 qpair failed and we were unable to recover it. 00:30:48.858 [2024-11-20 10:04:19.564351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.858 [2024-11-20 10:04:19.564369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.859 qpair failed and we were unable to recover it. 00:30:48.859 [2024-11-20 10:04:19.564761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.859 [2024-11-20 10:04:19.564778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.859 qpair failed and we were unable to recover it. 00:30:48.859 [2024-11-20 10:04:19.565116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.859 [2024-11-20 10:04:19.565132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.859 qpair failed and we were unable to recover it. 00:30:48.859 [2024-11-20 10:04:19.565466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.859 [2024-11-20 10:04:19.565484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.859 qpair failed and we were unable to recover it. 00:30:48.859 [2024-11-20 10:04:19.565804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.859 [2024-11-20 10:04:19.565822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.859 qpair failed and we were unable to recover it. 00:30:48.859 [2024-11-20 10:04:19.566173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.859 [2024-11-20 10:04:19.566191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.859 qpair failed and we were unable to recover it. 00:30:48.859 [2024-11-20 10:04:19.566530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.859 [2024-11-20 10:04:19.566548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.859 qpair failed and we were unable to recover it. 
00:30:48.859 [2024-11-20 10:04:19.566881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.566898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.567240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.567260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.567591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.567607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.567948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.567967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.568305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.568322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.568663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.568681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.569054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.569073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.569379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.569396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.569763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.569780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.570113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.570131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.570483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.570501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.570878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.570895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.571230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.571248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.571593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.571610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.571956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.571974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.572315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.572333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.572670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.572688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.573020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.573036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.573385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.573403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.573743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.573761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.573941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.573959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.574343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.574362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.574703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.574719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.575059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.575075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.575460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.575478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.575815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.859 [2024-11-20 10:04:19.575836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.859 qpair failed and we were unable to recover it.
00:30:48.859 [2024-11-20 10:04:19.576151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.576175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.576516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.576533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.576864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.576883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.577224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.577241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.577593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.577612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.577984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.578001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.578334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.578352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.578696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.578713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.578916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.578934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.579277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.579295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.579631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.579650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.579996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.580014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.580363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.580382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.580762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.580779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.581104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.581122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.581460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.581478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.581825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.581843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.582174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.582192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.582539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.582558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.582901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.582919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.583266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.583283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.583624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.583643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.583979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.583998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.584331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.584348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.584659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.584675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.585008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.585024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.585339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.585357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.585543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.860 [2024-11-20 10:04:19.585561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.860 qpair failed and we were unable to recover it.
00:30:48.860 [2024-11-20 10:04:19.585890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.585908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.586245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.586262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.586489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.586505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.586868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.586885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.587219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.587236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.587575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.587592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.587933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.587951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.588278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.588295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.588635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.588653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.588998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.589015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.589340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.589356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.589695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.589716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.590041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.590058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.590398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.590415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.590758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.590776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.590975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.590995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.591346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.591364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.591698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.591716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.592051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.592069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.592403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.592421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.592762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.592781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.593111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.593130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.593441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.593460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.593798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.593816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.594138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.594157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.594492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.594510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.594847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.594865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.595207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.595224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.595553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.595569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.595934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.595951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.596285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.596302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.596637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.596654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.861 [2024-11-20 10:04:19.597003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.861 [2024-11-20 10:04:19.597021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.861 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.597406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.597423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.597806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.597825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.598176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.598194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.598534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.598552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.598888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.598906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.599187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.599205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.599545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.599562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.599889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.599908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.600229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.600247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.600601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.600619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.600973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.600990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.601310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.601328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.601662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.601679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.602016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.602035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.602386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.602403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.602763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.602780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.603111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.603128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.603466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.603484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.603815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.603841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.604164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.604182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.604532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.604548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.604892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.604909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.605256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.605274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.605616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.605634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.605971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.605988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.606327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.606345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.606682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.606699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.607058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.607076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.607412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.607431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.607763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.607782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.862 qpair failed and we were unable to recover it.
00:30:48.862 [2024-11-20 10:04:19.607994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.862 [2024-11-20 10:04:19.608013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.608342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.608359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.608683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.608701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.609048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.609065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.609408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.609425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.609785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.609803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.610071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.610089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.610401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.610419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.610757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.610775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.611104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.611121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.611458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.611476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.611813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.611832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.612166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.612186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.612525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.612542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.612906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.612923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.613256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.613274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.613610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.613628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.613966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.613983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.614361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.614379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.614584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.614604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.614952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.614971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.615308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.615326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.615643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.615660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.616003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.616020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.616341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.616359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.616709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.616726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.617070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.617088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.617426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.617444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.617774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.863 [2024-11-20 10:04:19.617796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.863 qpair failed and we were unable to recover it.
00:30:48.863 [2024-11-20 10:04:19.618129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.618148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.618490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.618508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.618849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.618867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.619069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.619089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.619418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.619437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.619758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.619776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.620111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.620128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.620461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.620480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.620818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.620835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.621164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.621183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.621526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.621543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.621880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.621899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.622241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.622258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.622590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.622609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.622965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.622982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.623319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.623339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.623572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.623590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.623914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.623932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.624273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.624291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.624634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.624653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.624988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.625006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.625349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.625367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.625705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.625722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.626062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.864 [2024-11-20 10:04:19.626078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.864 qpair failed and we were unable to recover it.
00:30:48.864 [2024-11-20 10:04:19.626421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.864 [2024-11-20 10:04:19.626438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.864 qpair failed and we were unable to recover it. 00:30:48.864 [2024-11-20 10:04:19.626754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.864 [2024-11-20 10:04:19.626773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.864 qpair failed and we were unable to recover it. 00:30:48.864 [2024-11-20 10:04:19.627110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.864 [2024-11-20 10:04:19.627129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.864 qpair failed and we were unable to recover it. 00:30:48.864 [2024-11-20 10:04:19.627459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.864 [2024-11-20 10:04:19.627480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.864 qpair failed and we were unable to recover it. 00:30:48.864 [2024-11-20 10:04:19.627803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.864 [2024-11-20 10:04:19.627820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.864 qpair failed and we were unable to recover it. 00:30:48.864 [2024-11-20 10:04:19.628135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.864 [2024-11-20 10:04:19.628151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.864 qpair failed and we were unable to recover it. 00:30:48.864 [2024-11-20 10:04:19.628491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.628508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.628841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.628859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.629197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.629215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.629561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.629580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 
00:30:48.865 [2024-11-20 10:04:19.629956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.629973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.630345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.630363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.630689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.630706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.631052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.631068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.631459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.631477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.631811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.631831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.632172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.632190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.632544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.632563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.632883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.632902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.633235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.633254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 
00:30:48.865 [2024-11-20 10:04:19.633593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.633611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.633934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.633953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.634289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.634307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.634646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.634665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.635000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.635018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.635360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.635379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.635705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.635722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.636056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.636074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.636431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.636450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.636768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.636786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 
00:30:48.865 [2024-11-20 10:04:19.637123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.637142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.637487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.637504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.637704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.637723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.638045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.638063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.638407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.638425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.638656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.638674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.639002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.865 [2024-11-20 10:04:19.639020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.865 qpair failed and we were unable to recover it. 00:30:48.865 [2024-11-20 10:04:19.639367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.639385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.639712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.639730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.640065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.640083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 
00:30:48.866 [2024-11-20 10:04:19.640465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.640483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.640815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.640833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.641174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.641197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.641417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.641436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.641852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.641870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.642208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.642227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.642562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.642579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.642914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.642932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.643272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.643289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.643603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.643622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 
00:30:48.866 [2024-11-20 10:04:19.643966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.643983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.644355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.644374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.644716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.644733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.645107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.645125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.645465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.645482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.645813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.645831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.646182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.646202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.646544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.646561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.646910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.646927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.647146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.647175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 
00:30:48.866 [2024-11-20 10:04:19.647509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.647526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.647875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.647891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.648238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.648255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.648585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.648601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.648938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.648957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.649310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.649328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.649668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.649687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.650022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.650039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.866 qpair failed and we were unable to recover it. 00:30:48.866 [2024-11-20 10:04:19.650377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.866 [2024-11-20 10:04:19.650396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.650774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.650791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 
00:30:48.867 [2024-11-20 10:04:19.651131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.651149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.651424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.651441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.651742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.651760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.652085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.652103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.652444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.652462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.652695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.652712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.653049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.653065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.653398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.653414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.653747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.653764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.654098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.654113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 
00:30:48.867 [2024-11-20 10:04:19.654457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.654475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.654829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.654847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.655065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.655087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.655309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.655327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.655672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.655691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.656010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.656027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.656377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.656396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.656740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.656756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.657095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.657113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.657342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.657360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 
00:30:48.867 [2024-11-20 10:04:19.657770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.657789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.658132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.658150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.658495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.658513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.658867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.658885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.659230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.659249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.659460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.659477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.659810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.659828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.660168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.660185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.660527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.660545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.660899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.660918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 
00:30:48.867 [2024-11-20 10:04:19.661270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.661288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.661648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.661665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.662021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.662038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.662382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.867 [2024-11-20 10:04:19.662402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.867 qpair failed and we were unable to recover it. 00:30:48.867 [2024-11-20 10:04:19.662738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.662756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.663109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.663127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.663349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.663367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.663716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.663735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.664077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.664094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.664409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.664427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 
00:30:48.868 [2024-11-20 10:04:19.664776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.664794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.665130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.665147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.665528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.665545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.665898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.665916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.666261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.666279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.666619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.666638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.666980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.666998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.667319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.667336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.667698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.667715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.668054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.668071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 
00:30:48.868 [2024-11-20 10:04:19.668424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.668442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.668780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.668798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.669078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.669100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.669428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.669446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.669789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.669808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.670171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.670188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.670528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.670546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.670920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.670938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.671155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.671179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 00:30:48.868 [2024-11-20 10:04:19.671405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.671424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.868 qpair failed and we were unable to recover it. 
00:30:48.868 [2024-11-20 10:04:19.671753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.868 [2024-11-20 10:04:19.671770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 00:30:48.869 [2024-11-20 10:04:19.672492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.869 [2024-11-20 10:04:19.672516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 00:30:48.869 [2024-11-20 10:04:19.672845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.869 [2024-11-20 10:04:19.672863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 00:30:48.869 [2024-11-20 10:04:19.673216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.869 [2024-11-20 10:04:19.673235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 00:30:48.869 [2024-11-20 10:04:19.673351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.869 [2024-11-20 10:04:19.673365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 00:30:48.869 [2024-11-20 10:04:19.673576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.869 [2024-11-20 10:04:19.673595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 00:30:48.869 [2024-11-20 10:04:19.673795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.869 [2024-11-20 10:04:19.673811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 00:30:48.869 [2024-11-20 10:04:19.674145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.869 [2024-11-20 10:04:19.674170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 00:30:48.869 [2024-11-20 10:04:19.674515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.869 [2024-11-20 10:04:19.674532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 00:30:48.869 [2024-11-20 10:04:19.674758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.869 [2024-11-20 10:04:19.674775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 
00:30:48.869 [2024-11-20 10:04:19.675124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.869 [2024-11-20 10:04:19.675141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 00:30:48.869 [2024-11-20 10:04:19.675499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.869 [2024-11-20 10:04:19.675518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 00:30:48.869 [2024-11-20 10:04:19.675872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.869 [2024-11-20 10:04:19.675889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 00:30:48.869 [2024-11-20 10:04:19.676281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.869 [2024-11-20 10:04:19.676299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 00:30:48.869 [2024-11-20 10:04:19.676614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.869 [2024-11-20 10:04:19.676631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 00:30:48.869 [2024-11-20 10:04:19.676945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.869 [2024-11-20 10:04:19.676961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 00:30:48.869 [2024-11-20 10:04:19.677167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.869 [2024-11-20 10:04:19.677185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 00:30:48.869 [2024-11-20 10:04:19.677586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.869 [2024-11-20 10:04:19.677604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 00:30:48.869 [2024-11-20 10:04:19.677951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.869 [2024-11-20 10:04:19.677969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 00:30:48.869 [2024-11-20 10:04:19.678307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.869 [2024-11-20 10:04:19.678325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:48.869 qpair failed and we were unable to recover it. 
00:30:48.869 [2024-11-20 10:04:19.678649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.869 [2024-11-20 10:04:19.678666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.869 qpair failed and we were unable to recover it.
00:30:48.869 [... the same three-line triplet — connect() failed, errno = 111 / sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats ~200 more times, identical except for timestamps (2024-11-20 10:04:19.679006 through 10:04:19.748919); omitted here ...]
00:30:48.876 [2024-11-20 10:04:19.749215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.876 [2024-11-20 10:04:19.749232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:48.876 qpair failed and we were unable to recover it.
00:30:49.150 [2024-11-20 10:04:19.749618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.150 [2024-11-20 10:04:19.749639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.150 qpair failed and we were unable to recover it. 00:30:49.150 [2024-11-20 10:04:19.749956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.150 [2024-11-20 10:04:19.749974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.150 qpair failed and we were unable to recover it. 00:30:49.150 [2024-11-20 10:04:19.750326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.150 [2024-11-20 10:04:19.750349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.150 qpair failed and we were unable to recover it. 00:30:49.150 [2024-11-20 10:04:19.750684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.150 [2024-11-20 10:04:19.750703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.150 qpair failed and we were unable to recover it. 00:30:49.150 [2024-11-20 10:04:19.750886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.150 [2024-11-20 10:04:19.750904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.150 qpair failed and we were unable to recover it. 00:30:49.150 [2024-11-20 10:04:19.751257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.150 [2024-11-20 10:04:19.751276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.150 qpair failed and we were unable to recover it. 00:30:49.150 [2024-11-20 10:04:19.751603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.751622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.751970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.751989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.752325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.752345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.752577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.752595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 
00:30:49.151 [2024-11-20 10:04:19.752944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.752964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.753301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.753320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.753668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.753686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.754011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.754029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.754362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.754380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.754718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.754736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.755075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.755093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.755420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.755439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.755775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.755793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.756133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.756150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 
00:30:49.151 [2024-11-20 10:04:19.756500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.756517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.756844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.756863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.757198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.757216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.757551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.757569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.757904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.757921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.758265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.758283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.758618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.758636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.758978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.758995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.759338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.759358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.759689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.759706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 
00:30:49.151 [2024-11-20 10:04:19.760041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.760060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.760371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.760390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.760743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.760762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.761100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.761117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.761457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.761477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.761805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.761823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.762172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.762192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.762512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.762529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.762871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.762889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.763229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.763247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 
00:30:49.151 [2024-11-20 10:04:19.763591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.763609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.763930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.763948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.151 [2024-11-20 10:04:19.764279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.151 [2024-11-20 10:04:19.764299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.151 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.764631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.764648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.764999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.765018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.765364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.765382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.765615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.765631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.766002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.766019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.766333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.766349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.766710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.766727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 
00:30:49.152 [2024-11-20 10:04:19.767051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.767069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.767422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.767439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.767772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.767790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.768144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.768182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.768509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.768528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.768863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.768881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.769213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.769232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.769589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.769605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.769935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.769953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.770335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.770352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 
00:30:49.152 [2024-11-20 10:04:19.770699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.770717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.771042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.771059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.771396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.771415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.771757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.771774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.772113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.772131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.772443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.772461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.772667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.772685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.773027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.773046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.773390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.773408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.773595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.773613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 
00:30:49.152 [2024-11-20 10:04:19.773961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.773979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.774290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.774307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.774642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.774660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.775009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.775027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.775341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.775358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.775706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.775722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.776073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.776090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.776444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.776462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.776791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.776810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 00:30:49.152 [2024-11-20 10:04:19.777143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.152 [2024-11-20 10:04:19.777166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.152 qpair failed and we were unable to recover it. 
00:30:49.152 [2024-11-20 10:04:19.777473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.777491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.777829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.777848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.778185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.778207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.778582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.778599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.778939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.778957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.779302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.779320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.779674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.779692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.780026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.780044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.780383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.780402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.780774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.780790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 
00:30:49.153 [2024-11-20 10:04:19.781156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.781181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.781517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.781534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.781880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.781899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.782215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.782233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.782438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.782457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.782801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.782818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.783154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.783180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.783497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.783515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.783881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.783899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.784243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.784261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 
00:30:49.153 [2024-11-20 10:04:19.784594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.784614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.784957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.784974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.785316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.785334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.785671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.785689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.786025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.786043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.786389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.786407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.786742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.786761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.787097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.787114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.787462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.787481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.787808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.787827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 
00:30:49.153 [2024-11-20 10:04:19.788165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.788185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.788527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.788545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.788879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.788897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.789247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.789265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.789640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.789658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.789943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.789960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.790293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.790311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.153 qpair failed and we were unable to recover it. 00:30:49.153 [2024-11-20 10:04:19.790657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.153 [2024-11-20 10:04:19.790673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.154 qpair failed and we were unable to recover it. 00:30:49.154 [2024-11-20 10:04:19.791013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.154 [2024-11-20 10:04:19.791030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.154 qpair failed and we were unable to recover it. 00:30:49.154 [2024-11-20 10:04:19.791340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.154 [2024-11-20 10:04:19.791358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.154 qpair failed and we were unable to recover it. 
00:30:49.154 [2024-11-20 10:04:19.791696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.154 [2024-11-20 10:04:19.791713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.154 qpair failed and we were unable to recover it. 00:30:49.154 [2024-11-20 10:04:19.792064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.154 [2024-11-20 10:04:19.792083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.154 qpair failed and we were unable to recover it. 00:30:49.154 [2024-11-20 10:04:19.792425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.154 [2024-11-20 10:04:19.792446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.154 qpair failed and we were unable to recover it. 00:30:49.154 [2024-11-20 10:04:19.792775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.154 [2024-11-20 10:04:19.792793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.154 qpair failed and we were unable to recover it. 00:30:49.154 [2024-11-20 10:04:19.793123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.154 [2024-11-20 10:04:19.793140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.154 qpair failed and we were unable to recover it. 00:30:49.154 [2024-11-20 10:04:19.793486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.154 [2024-11-20 10:04:19.793505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.154 qpair failed and we were unable to recover it. 00:30:49.154 [2024-11-20 10:04:19.793844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.154 [2024-11-20 10:04:19.793861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.154 qpair failed and we were unable to recover it. 00:30:49.154 [2024-11-20 10:04:19.794192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.154 [2024-11-20 10:04:19.794209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.154 qpair failed and we were unable to recover it. 00:30:49.154 [2024-11-20 10:04:19.794532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.154 [2024-11-20 10:04:19.794548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.154 qpair failed and we were unable to recover it. 00:30:49.154 [2024-11-20 10:04:19.794892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.154 [2024-11-20 10:04:19.794911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.154 qpair failed and we were unable to recover it. 
00:30:49.154 [2024-11-20 10:04:19.795255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.154 [2024-11-20 10:04:19.795273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.154 qpair failed and we were unable to recover it.
[last 3 messages repeated 210 times, from 10:04:19.795255 to 10:04:19.868918, identical except for their timestamps; every attempt fails with connect() errno = 111 against tqpair=0x7f3890000b90 at 10.0.0.2, port 4420]
00:30:49.160 [2024-11-20 10:04:19.869277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.869295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.869651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.869668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.870009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.870026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.870338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.870356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.870696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.870715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.871066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.871085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.871426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.871445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.871781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.871799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.872140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.872182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.872509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.872533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 
00:30:49.160 [2024-11-20 10:04:19.872843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.872861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.873198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.873216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.873423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.873439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.873761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.873778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.874108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.874128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.874466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.874484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.874687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.874705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.875041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.875058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.875409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.875429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.875783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.875801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 
00:30:49.160 [2024-11-20 10:04:19.876143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.876172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.876490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.876509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.876845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.876864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.877205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.877223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.877528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.877549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.877772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.877790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.878129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.878150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.878519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.878537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.878866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.878885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 00:30:49.160 [2024-11-20 10:04:19.879226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.879245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.160 qpair failed and we were unable to recover it. 
00:30:49.160 [2024-11-20 10:04:19.879625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.160 [2024-11-20 10:04:19.879643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.879989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.880008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.880356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.880375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.880704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.880721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.881060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.881078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.881434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.881453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.881794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.881812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.882062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.882081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.882375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.882392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.882739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.882758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 
00:30:49.161 [2024-11-20 10:04:19.883087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.883106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.883463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.883482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.883808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.883826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.884167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.884186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.884519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.884536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.884834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.884855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.885200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.885218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.885556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.885574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.885908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.885924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.886281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.886304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 
00:30:49.161 [2024-11-20 10:04:19.886675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.886693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.887032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.887051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.887380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.887399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.887611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.887628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.887988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.888005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.888344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.888363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.888648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.888665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.889008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.889027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.889344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.889362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.889703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.889723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 
00:30:49.161 [2024-11-20 10:04:19.890056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.890074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.890287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.890305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.890542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.890559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.890906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.890925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.891255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.891273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.891596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.891613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.891836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.891855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.161 [2024-11-20 10:04:19.892228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.161 [2024-11-20 10:04:19.892248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.161 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.892608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.892625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.892968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.892986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 
00:30:49.162 [2024-11-20 10:04:19.893326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.893344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.893714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.893733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.894068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.894084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.894430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.894449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.894778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.894794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.895131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.895150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.895485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.895503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.895850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.895869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.896200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.896219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.896535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.896552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 
00:30:49.162 [2024-11-20 10:04:19.897229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.897247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.897587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.897604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.898026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.898043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.898353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.898371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.898709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.898725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.898937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.898953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.899181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.899198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.899537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.899553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.899902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.899918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.900139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.900165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 
00:30:49.162 [2024-11-20 10:04:19.900532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.900551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.900893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.900910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.901132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.901148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.901494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.901511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.901838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.901856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.902214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.902232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.902570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.902588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.902913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.902931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.903306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.903324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.903653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.903671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 
00:30:49.162 [2024-11-20 10:04:19.904020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.904036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.904367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.904384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.904727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.904743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.905080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.905096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.905317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.905335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.162 [2024-11-20 10:04:19.905724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.162 [2024-11-20 10:04:19.905743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.162 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.906078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.906095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.906310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.906327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.906687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.906704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.907044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.907063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 
00:30:49.163 [2024-11-20 10:04:19.907372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.907391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.907633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.907650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.907996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.908012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.908386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.908404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.908738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.908754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.909091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.909108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.909329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.909348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.909683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.909700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.910039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.910057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.910403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.910420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 
00:30:49.163 [2024-11-20 10:04:19.910742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.910761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.911089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.911106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.911519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.911537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.911792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.911809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.912171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.912189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.912535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.912554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.912893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.912912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.913140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.913165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.913493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.913510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 00:30:49.163 [2024-11-20 10:04:19.913696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.913718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it. 
00:30:49.163 [2024-11-20 10:04:19.914081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.163 [2024-11-20 10:04:19.914098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.163 qpair failed and we were unable to recover it.
[... the same three-line error (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeated verbatim for every reconnect attempt from 10:04:19.914 through 10:04:19.982; only the timestamps differ ...]
00:30:49.169 [2024-11-20 10:04:19.982838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.169 [2024-11-20 10:04:19.982855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.169 qpair failed and we were unable to recover it. 00:30:49.169 [2024-11-20 10:04:19.983192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.169 [2024-11-20 10:04:19.983210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.169 qpair failed and we were unable to recover it. 00:30:49.169 [2024-11-20 10:04:19.983547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.169 [2024-11-20 10:04:19.983564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.169 qpair failed and we were unable to recover it. 00:30:49.169 [2024-11-20 10:04:19.983889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.169 [2024-11-20 10:04:19.983907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.169 qpair failed and we were unable to recover it. 00:30:49.169 [2024-11-20 10:04:19.983983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.169 [2024-11-20 10:04:19.984000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.169 qpair failed and we were unable to recover it. 00:30:49.169 [2024-11-20 10:04:19.984305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.169 [2024-11-20 10:04:19.984323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.169 qpair failed and we were unable to recover it. 00:30:49.169 [2024-11-20 10:04:19.984659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.169 [2024-11-20 10:04:19.984677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.169 qpair failed and we were unable to recover it. 00:30:49.169 [2024-11-20 10:04:19.984857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.169 [2024-11-20 10:04:19.984875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.169 qpair failed and we were unable to recover it. 00:30:49.169 [2024-11-20 10:04:19.985055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.169 [2024-11-20 10:04:19.985071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.169 qpair failed and we were unable to recover it. 00:30:49.169 [2024-11-20 10:04:19.985430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.169 [2024-11-20 10:04:19.985448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.169 qpair failed and we were unable to recover it. 
00:30:49.169 [2024-11-20 10:04:19.985793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.169 [2024-11-20 10:04:19.985811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.169 qpair failed and we were unable to recover it. 00:30:49.169 [2024-11-20 10:04:19.986154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.169 [2024-11-20 10:04:19.986178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.169 qpair failed and we were unable to recover it. 00:30:49.169 [2024-11-20 10:04:19.986527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.169 [2024-11-20 10:04:19.986544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.169 qpair failed and we were unable to recover it. 00:30:49.169 [2024-11-20 10:04:19.986872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.169 [2024-11-20 10:04:19.986886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.169 qpair failed and we were unable to recover it. 00:30:49.169 [2024-11-20 10:04:19.987228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.169 [2024-11-20 10:04:19.987244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.169 qpair failed and we were unable to recover it. 00:30:49.169 [2024-11-20 10:04:19.987463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.169 [2024-11-20 10:04:19.987477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.169 qpair failed and we were unable to recover it. 00:30:49.169 [2024-11-20 10:04:19.987805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.169 [2024-11-20 10:04:19.987823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.169 qpair failed and we were unable to recover it. 00:30:49.169 [2024-11-20 10:04:19.988167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.169 [2024-11-20 10:04:19.988182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.169 qpair failed and we were unable to recover it. 00:30:49.169 [2024-11-20 10:04:19.988529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.169 [2024-11-20 10:04:19.988543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.169 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.988881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.988895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 
00:30:49.170 [2024-11-20 10:04:19.989245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.989260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.989652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.989666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.990009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.990026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.990345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.990362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.990687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.990704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.991028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.991045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.991380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.991399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.991733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.991752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.991971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.991988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.992318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.992337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 
00:30:49.170 [2024-11-20 10:04:19.992680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.992698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.993043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.993061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.993377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.993395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.993586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.993604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.993807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.993826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.994171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.994189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.994530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.994549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.994870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.994889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.995224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.995243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.995586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.995604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 
00:30:49.170 [2024-11-20 10:04:19.995951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.995969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.996308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.996328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.996660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.996681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.996897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.996917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.997247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.997267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.997593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.997611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.997797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.997817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.998115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.998133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.998391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.998409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.998761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.998779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 
00:30:49.170 [2024-11-20 10:04:19.999092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.999110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.999456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.999474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:19.999829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:19.999847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:20.000187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:20.000206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:20.000506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:20.000523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:20.000864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:20.000882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.170 [2024-11-20 10:04:20.001209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.170 [2024-11-20 10:04:20.001232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.170 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.001567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.001587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.002347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.002374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.002608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.002628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 
00:30:49.171 [2024-11-20 10:04:20.002976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.002994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.003329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.003348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.003698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.003716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.004051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.004070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.004416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.004435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.004770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.004789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.005157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.005185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.005523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.005541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.005906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.005925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.006238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.006256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 
00:30:49.171 [2024-11-20 10:04:20.006607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.006625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.006957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.006975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.007314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.007334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.007673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.007691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.007887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.007907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.008270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.008288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.008562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.008582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.008901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.008919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.009152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.009179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.009411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.009429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 
00:30:49.171 [2024-11-20 10:04:20.009777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.009795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.010005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.010026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.010329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.010348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.010574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.010591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.010923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.010941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.011271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.011289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.011613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.011630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.011962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.011980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.012325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.012345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.012696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.012712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 
00:30:49.171 [2024-11-20 10:04:20.013050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.013068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.013257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.013277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.013616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.013633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.013954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.013972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.171 qpair failed and we were unable to recover it. 00:30:49.171 [2024-11-20 10:04:20.014291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.171 [2024-11-20 10:04:20.014309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.014644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.014663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.014875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.014904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.015107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.015124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.015353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.015372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.015697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.015715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 
00:30:49.172 [2024-11-20 10:04:20.016052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.016069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.016419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.016436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.016774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.016793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.017012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.017030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.017387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.017405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.017746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.017763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.018100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.018119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.018453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.018472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.018788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.018806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.019139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.019156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 
00:30:49.172 [2024-11-20 10:04:20.019527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.019546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.019870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.019887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.020247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.020264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.020492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.020510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.020838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.020856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.021193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.021210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.021539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.021557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.021889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.021908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.022241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.022259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.022585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.022602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 
00:30:49.172 [2024-11-20 10:04:20.022948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.022965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.023318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.023338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.023574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.023591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.023814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.023830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.024197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.024215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.024552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.024568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.024905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.024921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.025258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.025278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.025637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.025655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 00:30:49.172 [2024-11-20 10:04:20.025991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.172 [2024-11-20 10:04:20.026009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.172 qpair failed and we were unable to recover it. 
00:30:49.172 [2024-11-20 10:04:20.026332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.172 [2024-11-20 10:04:20.026350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.172 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 at posix.c:1054, sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 at nvme_tcp.c:2288, "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 10:04:20.026 through 10:04:20.098; the duplicate entries are elided here ...]
00:30:49.455 [2024-11-20 10:04:20.098750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.455 [2024-11-20 10:04:20.098769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.455 qpair failed and we were unable to recover it.
00:30:49.455 [2024-11-20 10:04:20.099090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.455 [2024-11-20 10:04:20.099108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.455 qpair failed and we were unable to recover it. 00:30:49.455 [2024-11-20 10:04:20.099435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.455 [2024-11-20 10:04:20.099454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.455 qpair failed and we were unable to recover it. 00:30:49.455 [2024-11-20 10:04:20.099781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.455 [2024-11-20 10:04:20.099800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.455 qpair failed and we were unable to recover it. 00:30:49.455 [2024-11-20 10:04:20.099994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.455 [2024-11-20 10:04:20.100013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.455 qpair failed and we were unable to recover it. 00:30:49.455 [2024-11-20 10:04:20.100339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.455 [2024-11-20 10:04:20.100357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.455 qpair failed and we were unable to recover it. 00:30:49.455 [2024-11-20 10:04:20.100685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.455 [2024-11-20 10:04:20.100701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.455 qpair failed and we were unable to recover it. 00:30:49.455 [2024-11-20 10:04:20.101035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.455 [2024-11-20 10:04:20.101052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.455 qpair failed and we were unable to recover it. 00:30:49.455 [2024-11-20 10:04:20.101299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.455 [2024-11-20 10:04:20.101316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.455 qpair failed and we were unable to recover it. 00:30:49.455 [2024-11-20 10:04:20.101537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.455 [2024-11-20 10:04:20.101555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.455 qpair failed and we were unable to recover it. 00:30:49.455 [2024-11-20 10:04:20.101891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.455 [2024-11-20 10:04:20.101908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.455 qpair failed and we were unable to recover it. 
00:30:49.455 [2024-11-20 10:04:20.102241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.455 [2024-11-20 10:04:20.102258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.455 qpair failed and we were unable to recover it. 00:30:49.455 [2024-11-20 10:04:20.102614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.455 [2024-11-20 10:04:20.102632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.455 qpair failed and we were unable to recover it. 00:30:49.455 [2024-11-20 10:04:20.102963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.102981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.103314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.103333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.103746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.103763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.104092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.104110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.104430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.104448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.104797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.104815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.105143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.105171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.105503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.105521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 
00:30:49.456 [2024-11-20 10:04:20.105866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.105884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.106224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.106243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.106583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.106600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.106939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.106958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.107192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.107210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.107438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.107455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.107808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.107827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.108170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.108188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.108502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.108520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.108862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.108879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 
00:30:49.456 [2024-11-20 10:04:20.109288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.109307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.109666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.109685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.110044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.110062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.110442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.110461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.110803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.110820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.111153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.111178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.111491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.111513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.111716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.111734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.112085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.112103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.112447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.112466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 
00:30:49.456 [2024-11-20 10:04:20.112809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.112828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.113028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.113048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.113406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.113423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.456 qpair failed and we were unable to recover it. 00:30:49.456 [2024-11-20 10:04:20.113750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.456 [2024-11-20 10:04:20.113769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.114093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.114111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.114450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.114468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.114847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.114866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.115199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.115217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.115579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.115597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.115929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.115948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 
00:30:49.457 [2024-11-20 10:04:20.116144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.116170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.116505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.116524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.116856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.116874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.117215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.117234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.117584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.117601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.117941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.117959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.118265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.118283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.118645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.118664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.119002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.119021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.119333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.119350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 
00:30:49.457 [2024-11-20 10:04:20.119695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.119712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.120051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.120069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.120402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.120419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.120759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.120777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.121124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.121143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.121485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.121503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.121815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.121834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.122172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.122191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.122535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.122552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.122731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.122749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 
00:30:49.457 [2024-11-20 10:04:20.123092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.123109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.123447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.123465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.123810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.123827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.124172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.124191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.124537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.124554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.124884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.124902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.125245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.125268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.125590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.125609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.125945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.125964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.457 [2024-11-20 10:04:20.126304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.126321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 
00:30:49.457 [2024-11-20 10:04:20.126541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.457 [2024-11-20 10:04:20.126559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.457 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.126893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.126910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.127246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.127265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.127646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.127664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.127993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.128011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.128333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.128351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.128693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.128711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.129052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.129070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.129398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.129418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.129750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.129768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 
00:30:49.458 [2024-11-20 10:04:20.130104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.130122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.130461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.130480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.130826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.130844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.131177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.131195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.131535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.131552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.131891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.131910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.132125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.132144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.132469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.132492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.132835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.132853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.133224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.133243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 
00:30:49.458 [2024-11-20 10:04:20.133581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.133598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.133925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.133942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.134244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.134261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.134480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.134497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.134862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.134879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.135225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.135242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.135442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.135461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.135791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.135808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.135993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.136011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.136377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.136396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 
00:30:49.458 [2024-11-20 10:04:20.136727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.136744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.137077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.137095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.137497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.137515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.137851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.137870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.138205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.138223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.138569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.138588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.138911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.138932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.139276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.139295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.458 [2024-11-20 10:04:20.139634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.458 [2024-11-20 10:04:20.139652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.458 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.139979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.139997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 
00:30:49.459 [2024-11-20 10:04:20.140340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.140358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.140656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.140673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.140987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.141005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.141353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.141370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.141732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.141750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.142080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.142098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.142435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.142452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.142790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.142809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.143133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.143151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.143493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.143511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 
00:30:49.459 [2024-11-20 10:04:20.143851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.143870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.144210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.144228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.144542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.144558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.144895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.144912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.145120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.145139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.145464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.145482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.145827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.145844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.146184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.146200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.146536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.146554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.146770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.146788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 
00:30:49.459 [2024-11-20 10:04:20.147103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.147121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.147462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.147481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.147817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.147835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.148178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.148197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.148533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.148551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.148879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.148897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.149242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.149262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.149609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.149627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.149972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.149990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 00:30:49.459 [2024-11-20 10:04:20.150203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.459 [2024-11-20 10:04:20.150224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:49.459 qpair failed and we were unable to recover it. 
00:30:49.459 [2024-11-20 10:04:20.150452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.459 [2024-11-20 10:04:20.150471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.459 qpair failed and we were unable to recover it.
00:30:49.459 [2024-11-20 10:04:20.150821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.459 [2024-11-20 10:04:20.150840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.459 qpair failed and we were unable to recover it.
00:30:49.459 [2024-11-20 10:04:20.151191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.459 [2024-11-20 10:04:20.151210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.459 qpair failed and we were unable to recover it.
00:30:49.459 [2024-11-20 10:04:20.151538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.459 [2024-11-20 10:04:20.151556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.459 qpair failed and we were unable to recover it.
00:30:49.459 [2024-11-20 10:04:20.151897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.459 [2024-11-20 10:04:20.151916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.459 qpair failed and we were unable to recover it.
00:30:49.459 [2024-11-20 10:04:20.152236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.459 [2024-11-20 10:04:20.152254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.459 qpair failed and we were unable to recover it.
00:30:49.459 [2024-11-20 10:04:20.152522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.459 [2024-11-20 10:04:20.152541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.459 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.152873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.152892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.153228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.153248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.153597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.153616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.153966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.153983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.154316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.154335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.154528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.154548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.154888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.154906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.155220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.155240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.155598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.155617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.155950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.155969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.156314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.156333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.156689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.156709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.157042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.157061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.157379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.157399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.157751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.157770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.158119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.158138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.158497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.158515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.158848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.158867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.159093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.159112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.159434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.159452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.159783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.159802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.160108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.160126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.160371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.160390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.160712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.160730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.161088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.161107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.161367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.161386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.161708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.161731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.162037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.162056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.162386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.162405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.162777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.162795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.163095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.163113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.163317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.163337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.163659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.163678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.164025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.164043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.164392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.164411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.164741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.164759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.165131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.165149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.460 [2024-11-20 10:04:20.165560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.460 [2024-11-20 10:04:20.165577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.460 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.165768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.165787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.166043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.166062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.166395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.166413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.166767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.166786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.167099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.167116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.167510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.167528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.167660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.167676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.168004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.168022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.168341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.168358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.168683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.168700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.169074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.169091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.169449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.169469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.169654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.169674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.169919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.169938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.170315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.170333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.170698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.170716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.171048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.171066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.171385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.171403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.171759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.171775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.172110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.172127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.172452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.172470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.172827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.172845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.173182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.173199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.173461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.173477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.173705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.173722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.173954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.173970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.174151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.174174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.174519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.174536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.174756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.174778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.175136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.175153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.175484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.175503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.175828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.175847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.176174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.461 [2024-11-20 10:04:20.176192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.461 qpair failed and we were unable to recover it.
00:30:49.461 [2024-11-20 10:04:20.176545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.176562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.176914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.176933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.177257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.177276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.177637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.177655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.177987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.178004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.178334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.178352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.178462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.178477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.178716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x936e00 is same with the state(6) to be set
00:30:49.462 [2024-11-20 10:04:20.179381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.179450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.179895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.179912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.180356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.180415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.180758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.180777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.181386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.181445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.181798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.181815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.182173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.182188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.182543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.182557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.182915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.182928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.183166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.183180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.183388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.183404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.183701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.183716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.183934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.183948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.184300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.184313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.184627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.184645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.184864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.184878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.185191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.185204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.185499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.185511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.185829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.185842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.186191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.186204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.186415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.186428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.186756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.186768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.186957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.186971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.187316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.187330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.187686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.187702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.188029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.188043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.188378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.188393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.188613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.188627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.188974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.188988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.189333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.462 [2024-11-20 10:04:20.189346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.462 qpair failed and we were unable to recover it.
00:30:49.462 [2024-11-20 10:04:20.189679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.189692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.189881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.189894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.190206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.190219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.190423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.190435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.190758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.190771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.190979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.190991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.191321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.191335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.191636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.191648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.191838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.191852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.192191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.192205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.192514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.192527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.192896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.192910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.193250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.193263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.193523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.193535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.193859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.193871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.194224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.194236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.194568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.194581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.194778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.194791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.194985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.194997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.195306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.195319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.195657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.195672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.196023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.196036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.196365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.196379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.196729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.196743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.197072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.197090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.197281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.197296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.197652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.197666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.197994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.198008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.198381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.198400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.198592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.198604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.198956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.198970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.199308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.199321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.199648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.199662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.200011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.200025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.200338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.200351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.200671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.200683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.201032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.201044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.201387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.463 [2024-11-20 10:04:20.201400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.463 qpair failed and we were unable to recover it.
00:30:49.463 [2024-11-20 10:04:20.201724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.464 [2024-11-20 10:04:20.201736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.464 qpair failed and we were unable to recover it.
00:30:49.464 [2024-11-20 10:04:20.202062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.464 [2024-11-20 10:04:20.202074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.464 qpair failed and we were unable to recover it.
00:30:49.464 [2024-11-20 10:04:20.202437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.464 [2024-11-20 10:04:20.202451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.464 qpair failed and we were unable to recover it.
00:30:49.464 [2024-11-20 10:04:20.202575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.464 [2024-11-20 10:04:20.202586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.464 qpair failed and we were unable to recover it.
00:30:49.464 [2024-11-20 10:04:20.202894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.464 [2024-11-20 10:04:20.202906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.464 qpair failed and we were unable to recover it.
00:30:49.464 [2024-11-20 10:04:20.203248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.464 [2024-11-20 10:04:20.203261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.464 qpair failed and we were unable to recover it.
00:30:49.464 [2024-11-20 10:04:20.203592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.464 [2024-11-20 10:04:20.203604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.464 qpair failed and we were unable to recover it.
00:30:49.464 [2024-11-20 10:04:20.203807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.464 [2024-11-20 10:04:20.203819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.464 qpair failed and we were unable to recover it.
00:30:49.464 [2024-11-20 10:04:20.204169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.464 [2024-11-20 10:04:20.204181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.464 qpair failed and we were unable to recover it.
00:30:49.464 [2024-11-20 10:04:20.204369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.464 [2024-11-20 10:04:20.204383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.464 qpair failed and we were unable to recover it.
00:30:49.464 [2024-11-20 10:04:20.204763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.464 [2024-11-20 10:04:20.204775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.464 qpair failed and we were unable to recover it.
00:30:49.464 [2024-11-20 10:04:20.205113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.464 [2024-11-20 10:04:20.205128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.464 qpair failed and we were unable to recover it.
00:30:49.464 [2024-11-20 10:04:20.205461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.464 [2024-11-20 10:04:20.205474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.464 qpair failed and we were unable to recover it.
00:30:49.464 [2024-11-20 10:04:20.205829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.464 [2024-11-20 10:04:20.205841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.464 qpair failed and we were unable to recover it.
00:30:49.464 [2024-11-20 10:04:20.206172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.464 [2024-11-20 10:04:20.206186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.464 qpair failed and we were unable to recover it.
00:30:49.464 [2024-11-20 10:04:20.206546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.464 [2024-11-20 10:04:20.206560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.464 qpair failed and we were unable to recover it.
00:30:49.464 [2024-11-20 10:04:20.206776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.206790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.207129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.207141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.207322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.207336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.207550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.207563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.207894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.207907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.208263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.208276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.208603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.208615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.208797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.208810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.209149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.209175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.209481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.209493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 
00:30:49.464 [2024-11-20 10:04:20.209813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.209828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.210009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.210023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.210370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.210384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.210732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.210745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.211097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.211109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.211300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.211312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.211553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.211566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.211888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.211901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.212076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.212089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.212491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.212504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 
00:30:49.464 [2024-11-20 10:04:20.212826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.212840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.213153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.464 [2024-11-20 10:04:20.213170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.464 qpair failed and we were unable to recover it. 00:30:49.464 [2024-11-20 10:04:20.213495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.213508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.213867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.213879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.214203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.214217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.214623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.214635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.214991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.215006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.215358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.215371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.215556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.215567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.215879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.215892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 
00:30:49.465 [2024-11-20 10:04:20.216222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.216235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.216597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.216609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.216955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.216969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.217329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.217343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.217668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.217683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.218002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.218015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.218338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.218352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.218698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.218711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.218981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.218994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.219341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.219355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 
00:30:49.465 [2024-11-20 10:04:20.219689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.219703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.220048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.220060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.220399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.220413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.220754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.220767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.221104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.221119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.221453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.221465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.221871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.221884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.222228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.222242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.222585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.222598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.222942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.222956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 
00:30:49.465 [2024-11-20 10:04:20.223304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.223320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.223659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.223674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.224021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.224034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.224390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.224404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.224671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.465 [2024-11-20 10:04:20.224684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.465 qpair failed and we were unable to recover it. 00:30:49.465 [2024-11-20 10:04:20.225029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.225043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.225384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.225397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.225743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.225757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.226133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.226145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.226483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.226498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 
00:30:49.466 [2024-11-20 10:04:20.226843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.226857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.227168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.227181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.227522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.227534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.227862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.227874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.228234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.228247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.228585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.228599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.228917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.228930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.229363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.229376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.229574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.229587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.229909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.229921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 
00:30:49.466 [2024-11-20 10:04:20.230230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.230242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.230569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.230583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.230932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.230946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.231264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.231277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.231577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.231590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.231939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.231952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.232299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.232311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.232660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.232673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.232991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.233005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.233346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.233359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 
00:30:49.466 [2024-11-20 10:04:20.233710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.233725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.234045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.234058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.234372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.234384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.234722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.234734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.235077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.235100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.235434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.235447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.235770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.235782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.236119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.236131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.236460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.236475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.236790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.236803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 
00:30:49.466 [2024-11-20 10:04:20.237118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.237130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.237477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.237491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.237836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.466 [2024-11-20 10:04:20.237849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.466 qpair failed and we were unable to recover it. 00:30:49.466 [2024-11-20 10:04:20.238238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.238252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.238538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.238551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.238748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.238762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.239062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.239075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.239384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.239398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.239748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.239763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.240080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.240094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 
00:30:49.467 [2024-11-20 10:04:20.240442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.240454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.240810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.240824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.241125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.241138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.241492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.241506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.241834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.241847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.242171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.242184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.242533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.242545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.242862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.242874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.243178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.243191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.243374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.243386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 
00:30:49.467 [2024-11-20 10:04:20.243693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.243705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.244019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.244031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.244341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.244354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.244672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.244684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.245027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.245039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.245385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.245399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.245713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.245727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.246057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.246073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.246318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.246332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.246689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.246703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 
00:30:49.467 [2024-11-20 10:04:20.247050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.247063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.247482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.247495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.247837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.247852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.248185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.248198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.248516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.248530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.248855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.248867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.249197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.249209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.249538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.249552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.249895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.249909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 00:30:49.467 [2024-11-20 10:04:20.250252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.467 [2024-11-20 10:04:20.250265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.467 qpair failed and we were unable to recover it. 
00:30:49.467 [2024-11-20 10:04:20.250579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.250594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 00:30:49.468 [2024-11-20 10:04:20.250911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.250923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 00:30:49.468 [2024-11-20 10:04:20.251264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.251277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 00:30:49.468 [2024-11-20 10:04:20.251596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.251607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 00:30:49.468 [2024-11-20 10:04:20.251963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.251977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 00:30:49.468 [2024-11-20 10:04:20.252166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.252180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 00:30:49.468 [2024-11-20 10:04:20.252413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.252426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 00:30:49.468 [2024-11-20 10:04:20.252713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.252725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 00:30:49.468 [2024-11-20 10:04:20.253058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.253071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 00:30:49.468 [2024-11-20 10:04:20.253312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.253325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 
00:30:49.468 [2024-11-20 10:04:20.253627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.253639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 00:30:49.468 [2024-11-20 10:04:20.253983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.253996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 00:30:49.468 [2024-11-20 10:04:20.254314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.254326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 00:30:49.468 [2024-11-20 10:04:20.254662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.254674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 00:30:49.468 [2024-11-20 10:04:20.255006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.255019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 00:30:49.468 [2024-11-20 10:04:20.255345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.255358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 00:30:49.468 [2024-11-20 10:04:20.255533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.255545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 00:30:49.468 [2024-11-20 10:04:20.255895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.255909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 00:30:49.468 [2024-11-20 10:04:20.256250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.256263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 00:30:49.468 [2024-11-20 10:04:20.256582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.256593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it. 
00:30:49.468 [2024-11-20 10:04:20.256935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.468 [2024-11-20 10:04:20.256947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.468 qpair failed and we were unable to recover it.
00:30:49.468 [... the same error sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111 -> nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats for every reconnect attempt from 2024-11-20 10:04:20.257308 through 2024-11-20 10:04:20.328341 ...]
00:30:49.474 [2024-11-20 10:04:20.328719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.328732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.329075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.329090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.329403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.329416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.329757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.329770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.330092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.330105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.330425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.330438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.330782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.330795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.331138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.331151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.331508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.331521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.331871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.331883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 
00:30:49.474 [2024-11-20 10:04:20.332211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.332223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.332590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.332603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.332928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.332941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.333287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.333301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.333662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.333676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.334010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.334023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.334342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.334355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.334701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.334714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.334956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.334968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.335319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.335332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 
00:30:49.474 [2024-11-20 10:04:20.335688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.335702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.336046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.336060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.336412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.336425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.336769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.336784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.336979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.336992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.337282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.337295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.474 [2024-11-20 10:04:20.337604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.474 [2024-11-20 10:04:20.337618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.474 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.337964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.337976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.338312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.338326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.338645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.338657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 
00:30:49.475 [2024-11-20 10:04:20.339010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.339023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.339392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.339405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.339719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.339734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.340079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.340092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.340279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.340295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.340526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.340539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.340763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.340777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.341104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.341120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.341449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.341464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.341807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.341820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 
00:30:49.475 [2024-11-20 10:04:20.342048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.342060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.342392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.342405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.342742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.342756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.343099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.343112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.343446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.343461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.343801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.343814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.344178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.344194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.344527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.344541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.344873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.344887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.345232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.345246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 
00:30:49.475 [2024-11-20 10:04:20.345600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.345614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.345955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.345968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.346197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.346210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.346547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.346559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.346912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.346925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.347238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.347252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.347582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.347596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.347942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.347955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.475 [2024-11-20 10:04:20.348250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.475 [2024-11-20 10:04:20.348263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.475 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.348590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.348605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 
00:30:49.758 [2024-11-20 10:04:20.348928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.348944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.349180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.349193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.349537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.349551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.349901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.349915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.350283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.350298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.350659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.350673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.351024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.351037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.351386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.351400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.351717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.351729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.351988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.352001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 
00:30:49.758 [2024-11-20 10:04:20.352339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.352352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.352570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.352582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.352918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.352932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.353275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.353287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.353610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.353625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.353913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.353926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.354283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.354295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.354553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.354569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.354818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.354831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.355187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.355200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 
00:30:49.758 [2024-11-20 10:04:20.355419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.355433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.355659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.355673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.356021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.356034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.356386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.356401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.356760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.356774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.357093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.357107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.357313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.357325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.357658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.357671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.357989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.358003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.358361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.358374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 
00:30:49.758 [2024-11-20 10:04:20.358721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.358733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.358966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.358979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.359324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.359337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.359686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.359698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.360015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.758 [2024-11-20 10:04:20.360026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.758 qpair failed and we were unable to recover it. 00:30:49.758 [2024-11-20 10:04:20.360264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.360277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.360614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.360628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.360947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.360960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.361277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.361291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.361495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.361509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 
00:30:49.759 [2024-11-20 10:04:20.361857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.361870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.362184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.362197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.362552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.362566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.362883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.362896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.363225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.363237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.363577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.363591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.363948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.363960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.364308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.364323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.364675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.364687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.365004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.365018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 
00:30:49.759 [2024-11-20 10:04:20.365340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.365353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.365661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.365673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.366015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.366028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.366362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.366375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.366597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.366610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.366956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.366973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.367299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.367313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.367610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.367625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.367917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.367930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.368278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.368292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 
00:30:49.759 [2024-11-20 10:04:20.368604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.368616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.368937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.368951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.369301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.369315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.369669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.369682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.369999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.370010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.370337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.370350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.370682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.370695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.371043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.371056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.371385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.371397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.371728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.371740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 
00:30:49.759 [2024-11-20 10:04:20.372079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.372094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.372441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.759 [2024-11-20 10:04:20.372454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.759 qpair failed and we were unable to recover it. 00:30:49.759 [2024-11-20 10:04:20.372766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.760 [2024-11-20 10:04:20.372778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.760 qpair failed and we were unable to recover it. 00:30:49.760 [2024-11-20 10:04:20.373128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.760 [2024-11-20 10:04:20.373140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.760 qpair failed and we were unable to recover it. 00:30:49.760 [2024-11-20 10:04:20.373366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.760 [2024-11-20 10:04:20.373378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.760 qpair failed and we were unable to recover it. 00:30:49.760 [2024-11-20 10:04:20.373723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.760 [2024-11-20 10:04:20.373737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.760 qpair failed and we were unable to recover it. 00:30:49.760 [2024-11-20 10:04:20.373966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.760 [2024-11-20 10:04:20.373979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.760 qpair failed and we were unable to recover it. 00:30:49.760 [2024-11-20 10:04:20.374332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.760 [2024-11-20 10:04:20.374344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.760 qpair failed and we were unable to recover it. 00:30:49.760 [2024-11-20 10:04:20.374548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.760 [2024-11-20 10:04:20.374559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.760 qpair failed and we were unable to recover it. 00:30:49.760 [2024-11-20 10:04:20.374888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.760 [2024-11-20 10:04:20.374901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.760 qpair failed and we were unable to recover it. 
00:30:49.765 [... the identical three-line failure (connect() errno = 111; sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 10:04:20.372441 through 10:04:20.440040 ...]
00:30:49.765 [2024-11-20 10:04:20.440353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.765 [2024-11-20 10:04:20.440365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.765 qpair failed and we were unable to recover it. 00:30:49.765 [2024-11-20 10:04:20.440703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.765 [2024-11-20 10:04:20.440716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.765 qpair failed and we were unable to recover it. 00:30:49.765 [2024-11-20 10:04:20.441055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.765 [2024-11-20 10:04:20.441066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.765 qpair failed and we were unable to recover it. 00:30:49.765 [2024-11-20 10:04:20.441400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.765 [2024-11-20 10:04:20.441414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.765 qpair failed and we were unable to recover it. 00:30:49.765 [2024-11-20 10:04:20.441738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.765 [2024-11-20 10:04:20.441752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.765 qpair failed and we were unable to recover it. 00:30:49.765 [2024-11-20 10:04:20.442074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.765 [2024-11-20 10:04:20.442085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.765 qpair failed and we were unable to recover it. 00:30:49.765 [2024-11-20 10:04:20.442441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.765 [2024-11-20 10:04:20.442454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.765 qpair failed and we were unable to recover it. 00:30:49.765 [2024-11-20 10:04:20.442788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.765 [2024-11-20 10:04:20.442800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.765 qpair failed and we were unable to recover it. 00:30:49.765 [2024-11-20 10:04:20.443148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.765 [2024-11-20 10:04:20.443166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.765 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.443462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.443472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 
00:30:49.766 [2024-11-20 10:04:20.443794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.443805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.444165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.444176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.444430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.444440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.444755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.444768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.445081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.445092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.445378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.445391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.445710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.445721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.446040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.446051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.446379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.446392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.446769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.446781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 
00:30:49.766 [2024-11-20 10:04:20.447184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.447195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.447392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.447408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.447748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.447760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.448084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.448094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.448407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.448419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.448669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.448679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.449010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.449022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.449370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.449383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.449714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.449732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.450114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.450124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 
00:30:49.766 [2024-11-20 10:04:20.450531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.450542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.450875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.450886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.451140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.451152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.451468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.451479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.451789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.451800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.452021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.452032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.452385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.452397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.452721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.452735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.452999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.453011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.453220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.453230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 
00:30:49.766 [2024-11-20 10:04:20.453532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.453543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.453925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.453938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.454174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.454185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.454494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.454505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.454853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.454866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.455170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.766 [2024-11-20 10:04:20.455182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.766 qpair failed and we were unable to recover it. 00:30:49.766 [2024-11-20 10:04:20.455529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.455541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.455728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.455740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.455923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.455934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.456290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.456301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 
00:30:49.767 [2024-11-20 10:04:20.456510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.456523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.456889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.456901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.457089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.457101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.457463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.457476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.457814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.457825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.458130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.458141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.458473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.458484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.458817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.458828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.459185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.459197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.459597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.459608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 
00:30:49.767 [2024-11-20 10:04:20.459827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.459838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.460061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.460072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.460278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.460289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.460625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.460635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.460937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.460948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.461318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.461329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.461639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.461650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.461984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.461994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.462295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.462307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.462601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.462611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 
00:30:49.767 [2024-11-20 10:04:20.462954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.462965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.463196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.463208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.463605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.463616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.463959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.463970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.464170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.464182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.464482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.464493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.464696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.464708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.465089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.465099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.767 [2024-11-20 10:04:20.465440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.767 [2024-11-20 10:04:20.465451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.767 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.465786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.465797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 
00:30:49.768 [2024-11-20 10:04:20.466140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.466151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.466493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.466504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.466815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.466826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.467001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.467012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.467319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.467333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.467643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.467653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.468054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.468064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.468246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.468256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.468488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.468498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.468832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.468842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 
00:30:49.768 [2024-11-20 10:04:20.469088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.469099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.469454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.469467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.469803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.469814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.469993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.470005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.470344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.470356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.470706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.470717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.471057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.471068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.471385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.471397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.471702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.471713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.471841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.471853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 
00:30:49.768 [2024-11-20 10:04:20.472198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.472210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.472428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.472439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.472639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.472650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.472856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.472868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.473220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.473231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.473478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.473489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.473729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.473741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.474067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.474078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.474405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.474416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.474661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.474672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 
00:30:49.768 [2024-11-20 10:04:20.474871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.474882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.475079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.475090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.475426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.475437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.475797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.475808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.476115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.476127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.476457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.768 [2024-11-20 10:04:20.476468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.768 qpair failed and we were unable to recover it. 00:30:49.768 [2024-11-20 10:04:20.476779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.769 [2024-11-20 10:04:20.476792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.769 qpair failed and we were unable to recover it. 00:30:49.769 [2024-11-20 10:04:20.477138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.769 [2024-11-20 10:04:20.477149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.769 qpair failed and we were unable to recover it. 00:30:49.769 [2024-11-20 10:04:20.477374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.769 [2024-11-20 10:04:20.477385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.769 qpair failed and we were unable to recover it. 00:30:49.769 [2024-11-20 10:04:20.477710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.769 [2024-11-20 10:04:20.477721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.769 qpair failed and we were unable to recover it. 
00:30:49.769 [2024-11-20 10:04:20.478046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.769 [2024-11-20 10:04:20.478056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.769 qpair failed and we were unable to recover it. 00:30:49.769 [2024-11-20 10:04:20.478380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.769 [2024-11-20 10:04:20.478391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.769 qpair failed and we were unable to recover it. 00:30:49.769 [2024-11-20 10:04:20.478715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.769 [2024-11-20 10:04:20.478725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.769 qpair failed and we were unable to recover it. 00:30:49.769 [2024-11-20 10:04:20.479043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.769 [2024-11-20 10:04:20.479054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.769 qpair failed and we were unable to recover it. 00:30:49.769 [2024-11-20 10:04:20.479379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.769 [2024-11-20 10:04:20.479393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.769 qpair failed and we were unable to recover it. 00:30:49.769 [2024-11-20 10:04:20.479720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.769 [2024-11-20 10:04:20.479731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.769 qpair failed and we were unable to recover it. 00:30:49.769 [2024-11-20 10:04:20.480087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.769 [2024-11-20 10:04:20.480099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.769 qpair failed and we were unable to recover it. 00:30:49.769 [2024-11-20 10:04:20.480258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.769 [2024-11-20 10:04:20.480270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.769 qpair failed and we were unable to recover it. 00:30:49.769 [2024-11-20 10:04:20.480472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.769 [2024-11-20 10:04:20.480484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.769 qpair failed and we were unable to recover it. 00:30:49.769 [2024-11-20 10:04:20.480818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.769 [2024-11-20 10:04:20.480828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.769 qpair failed and we were unable to recover it. 
00:30:49.769 [2024-11-20 10:04:20.481150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.769 [2024-11-20 10:04:20.481167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.769 qpair failed and we were unable to recover it.
[The same three-line failure sequence repeats continuously, with only the microsecond timestamps advancing, from 10:04:20.481 through 10:04:20.549; every attempt targets tqpair=0x7f3894000b90 at 10.0.0.2, port=4420 and fails with errno = 111.]
00:30:49.775 [2024-11-20 10:04:20.549334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.775 [2024-11-20 10:04:20.549347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:49.775 qpair failed and we were unable to recover it.
00:30:49.775 [2024-11-20 10:04:20.549734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.549745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.549945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.549955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.550286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.550296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.550710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.550723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.551043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.551052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.551399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.551410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.551728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.551740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.552094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.552106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.552446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.552458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.552773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.552783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 
00:30:49.775 [2024-11-20 10:04:20.553131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.553142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.553482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.553493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.553842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.553860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.554251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.554261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.554600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.554612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.554936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.554946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.555339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.555351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.555538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.555551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.555891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.555901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.556210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.556222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 
00:30:49.775 [2024-11-20 10:04:20.556560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.556571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.556967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.556978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.557309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.557320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.557654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.557665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.557995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.558011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.558334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.775 [2024-11-20 10:04:20.558345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.775 qpair failed and we were unable to recover it. 00:30:49.775 [2024-11-20 10:04:20.558701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.776 [2024-11-20 10:04:20.558711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.776 qpair failed and we were unable to recover it. 00:30:49.776 [2024-11-20 10:04:20.558923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.776 [2024-11-20 10:04:20.558934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.776 qpair failed and we were unable to recover it. 00:30:49.776 [2024-11-20 10:04:20.559284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.776 [2024-11-20 10:04:20.559295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.776 qpair failed and we were unable to recover it. 00:30:49.776 [2024-11-20 10:04:20.559611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.776 [2024-11-20 10:04:20.559621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.776 qpair failed and we were unable to recover it. 
00:30:49.776 [2024-11-20 10:04:20.559942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.776 [2024-11-20 10:04:20.559955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.776 qpair failed and we were unable to recover it. 00:30:49.776 [2024-11-20 10:04:20.560265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.776 [2024-11-20 10:04:20.560276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.776 qpair failed and we were unable to recover it. 00:30:49.776 [2024-11-20 10:04:20.560600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.776 [2024-11-20 10:04:20.560612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.776 qpair failed and we were unable to recover it. 00:30:49.776 [2024-11-20 10:04:20.560813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.776 [2024-11-20 10:04:20.560824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.776 qpair failed and we were unable to recover it. 00:30:49.776 [2024-11-20 10:04:20.561166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.776 [2024-11-20 10:04:20.561178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.776 qpair failed and we were unable to recover it. 00:30:49.776 [2024-11-20 10:04:20.561483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.776 [2024-11-20 10:04:20.561494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.776 qpair failed and we were unable to recover it. 00:30:49.776 [2024-11-20 10:04:20.561851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.776 [2024-11-20 10:04:20.561860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.776 qpair failed and we were unable to recover it. 00:30:49.776 [2024-11-20 10:04:20.562176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.776 [2024-11-20 10:04:20.562187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.776 qpair failed and we were unable to recover it. 00:30:49.776 [2024-11-20 10:04:20.562410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.776 [2024-11-20 10:04:20.562422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.776 qpair failed and we were unable to recover it. 00:30:49.801 [2024-11-20 10:04:20.562750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.562760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 
00:30:49.802 [2024-11-20 10:04:20.563064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.563076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.563397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.563409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.563742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.563753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.564066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.564077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.564414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.564425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.564819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.564831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.565112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.565125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.565456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.565467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.565802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.565812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.566121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.566133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 
00:30:49.802 [2024-11-20 10:04:20.566462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.566473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.566783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.566794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.567118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.567131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.567470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.567481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.567785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.567796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.568123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.568134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.568387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.568398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.568732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.568743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.568956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.568968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.569309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.569322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 
00:30:49.802 [2024-11-20 10:04:20.569625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.569638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.569965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.569976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.570281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.570293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.570630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.570640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.571037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.571048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.571290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.571302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.802 [2024-11-20 10:04:20.571627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.802 [2024-11-20 10:04:20.571637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.802 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.571956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.571968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.572293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.572304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.572629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.572639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 
00:30:49.803 [2024-11-20 10:04:20.572997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.573007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.573323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.573334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.573650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.573660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.574021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.574034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.574351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.574362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.574610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.574620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.578181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.578219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.578576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.578589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.578939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.578953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.579286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.579298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 
00:30:49.803 [2024-11-20 10:04:20.579649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.579661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.579981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.579994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.580323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.580335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.580717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.580729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.581057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.581070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.581409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.581429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.581779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.581790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.581991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.582003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.582343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.582355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.582666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.582721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 
00:30:49.803 [2024-11-20 10:04:20.583073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.583086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.583433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.583450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.583668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.583682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.583934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.583963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.584335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.584365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.584729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.584757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.585138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.585179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.585557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.585586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.585954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.585977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.586321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.586345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 
00:30:49.803 [2024-11-20 10:04:20.586732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.803 [2024-11-20 10:04:20.586754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.803 qpair failed and we were unable to recover it. 00:30:49.803 [2024-11-20 10:04:20.587124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.587146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.587531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.587554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.587946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.587971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.588349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.588374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.588729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.588752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.589003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.589025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.589412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.589436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.589845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.589871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.590224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.590254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 
00:30:49.804 [2024-11-20 10:04:20.590512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.590536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.590862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.590893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.591220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.591246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.591617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.591641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.592009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.592032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.592296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.592321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.592513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.592536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.592889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.592912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.593284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.593311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.593647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.593670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 
00:30:49.804 [2024-11-20 10:04:20.594030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.594059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.594403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.594427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.594746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.594772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.595131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.595155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.595519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.595542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.595898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.595922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.596226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.596251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.596567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.596590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.596956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.596979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 00:30:49.804 [2024-11-20 10:04:20.597211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.597235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it. 
00:30:49.804 [2024-11-20 10:04:20.597579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.804 [2024-11-20 10:04:20.597605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:49.804 qpair failed and we were unable to recover it.
[The same pair of errors — posix.c:1054:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 — and the line "qpair failed and we were unable to recover it." repeat continuously from 10:04:20.597 through 10:04:20.682; only the first and last occurrences are shown here.]
00:30:50.094 [2024-11-20 10:04:20.682843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.094 [2024-11-20 10:04:20.682875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.094 qpair failed and we were unable to recover it.
00:30:50.094 [2024-11-20 10:04:20.683246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.094 [2024-11-20 10:04:20.683278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.094 qpair failed and we were unable to recover it. 00:30:50.094 [2024-11-20 10:04:20.683685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.094 [2024-11-20 10:04:20.683716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.094 qpair failed and we were unable to recover it. 00:30:50.094 [2024-11-20 10:04:20.684078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.094 [2024-11-20 10:04:20.684109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.094 qpair failed and we were unable to recover it. 00:30:50.094 [2024-11-20 10:04:20.684452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.094 [2024-11-20 10:04:20.684483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.094 qpair failed and we were unable to recover it. 00:30:50.094 [2024-11-20 10:04:20.684733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.094 [2024-11-20 10:04:20.684763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.094 qpair failed and we were unable to recover it. 00:30:50.094 [2024-11-20 10:04:20.685144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.094 [2024-11-20 10:04:20.685189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.094 qpair failed and we were unable to recover it. 00:30:50.094 [2024-11-20 10:04:20.685552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.094 [2024-11-20 10:04:20.685582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.094 qpair failed and we were unable to recover it. 00:30:50.094 [2024-11-20 10:04:20.685847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.094 [2024-11-20 10:04:20.685877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.094 qpair failed and we were unable to recover it. 00:30:50.094 [2024-11-20 10:04:20.686129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.686173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.686559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.686590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 
00:30:50.095 [2024-11-20 10:04:20.686837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.686868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.687222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.687256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.687631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.687662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.688035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.688067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.688461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.688493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.688876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.688906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.689317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.689350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.689728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.689760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.690136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.690181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.690520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.690551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 
00:30:50.095 [2024-11-20 10:04:20.690916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.690948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.691315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.691347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.691710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.691741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.692091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.692123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.692410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.692442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.692833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.692864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.693291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.693323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.693685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.693715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.694058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.694089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.694382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.694414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 
00:30:50.095 [2024-11-20 10:04:20.694791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.694822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.695206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.695244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.695520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.695550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.695922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.095 [2024-11-20 10:04:20.695955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.095 qpair failed and we were unable to recover it. 00:30:50.095 [2024-11-20 10:04:20.696325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.696359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.696616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.696651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.696910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.696940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.697134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.697191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.697557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.697589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.697972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.698003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 
00:30:50.096 [2024-11-20 10:04:20.698253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.698283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.698661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.698694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.699062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.699094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.699340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.699371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.699622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.699653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.699896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.699927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.700156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.700221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.700447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.700480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.700737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.700768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.701013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.701043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 
00:30:50.096 [2024-11-20 10:04:20.701393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.701427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.701794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.701825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.702207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.702239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.702408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.702437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.702653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.702683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.703038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.703069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.703442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.703477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.703884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.703915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.704266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.704298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 00:30:50.096 [2024-11-20 10:04:20.704661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.096 [2024-11-20 10:04:20.704693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.096 qpair failed and we were unable to recover it. 
00:30:50.096 [2024-11-20 10:04:20.704953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.704988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.705407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.705440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.705804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.705836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.706209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.706242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.706615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.706646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.706885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.706916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.707268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.707302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.707718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.707751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.708112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.708144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.708609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.708641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 
00:30:50.097 [2024-11-20 10:04:20.708884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.708914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.709274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.709313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.709681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.709711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.710083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.710116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.710484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.710516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.710743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.710772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.711010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.711041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.711271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.711304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.711758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.711790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.712146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.712193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 
00:30:50.097 [2024-11-20 10:04:20.712563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.712595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.712960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.712990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.713382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.713414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.713780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.713810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.097 [2024-11-20 10:04:20.714188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.097 [2024-11-20 10:04:20.714221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.097 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.714491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.714523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.714870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.714900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.715146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.715187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.715535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.715567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.715950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.715980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 
00:30:50.098 [2024-11-20 10:04:20.716342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.716373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.716735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.716766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.717119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.717150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.717401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.717432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.717673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.717705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.718069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.718100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.718464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.718496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.718626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.718659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.719032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.719067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.719414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.719448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 
00:30:50.098 [2024-11-20 10:04:20.719798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.719830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.720047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.720079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.720455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.720487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.720694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.720725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.721076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.721109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.721381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.721413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.098 qpair failed and we were unable to recover it. 00:30:50.098 [2024-11-20 10:04:20.721787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.098 [2024-11-20 10:04:20.721820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.722181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.722213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.722577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.722608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.722974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.723006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 
00:30:50.099 [2024-11-20 10:04:20.723385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.723416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.723785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.723823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.724198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.724231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.724583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.724613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.724993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.725024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.725394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.725425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.725682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.725713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.726116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.726146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.726520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.726551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.726868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.726901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 
00:30:50.099 [2024-11-20 10:04:20.727260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.727291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.727522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.727557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.727955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.727988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.728387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.728419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.728780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.728813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.729204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.729236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.729604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.729635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.729901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.729932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.730173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.730204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 00:30:50.099 [2024-11-20 10:04:20.730459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.730489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it. 
00:30:50.099 [2024-11-20 10:04:20.730852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.099 [2024-11-20 10:04:20.730883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.099 qpair failed and we were unable to recover it.
00:30:50.099-00:30:50.108 [... the same triplet - posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / "qpair failed and we were unable to recover it." - repeated for every reconnect attempt from 10:04:20.731344 through 10:04:20.812729, all against tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 ...]
00:30:50.108 [2024-11-20 10:04:20.813090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.813121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it.
00:30:50.108 [2024-11-20 10:04:20.813414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.813445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.813791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.813820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.814191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.814223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.814570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.814599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.814875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.814904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.815243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.815275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.815619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.815649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.815896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.815929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.816270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.816302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.816663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.816692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 
00:30:50.108 [2024-11-20 10:04:20.817051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.817081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.817318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.817350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.817718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.817751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.818103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.818134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.818501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.818533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.818935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.818965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.819310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.819341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.819690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.819722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.820080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.820109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.820395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.820430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 
00:30:50.108 [2024-11-20 10:04:20.820775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.820805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.821188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.821220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.108 [2024-11-20 10:04:20.821565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.108 [2024-11-20 10:04:20.821595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.108 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.821963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.821993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.822361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.822393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.822740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.822771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.823168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.823199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.823555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.823585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.823937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.823967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.824321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.824353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 
00:30:50.109 [2024-11-20 10:04:20.824784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.824814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.825049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.825082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.825426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.825459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.825824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.825856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.826213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.826244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.826474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.826504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.826874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.826905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.827273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.827306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.827548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.827588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.827813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.827844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 
00:30:50.109 [2024-11-20 10:04:20.828050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.828081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.828443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.828475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.828838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.828869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.829220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.829253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.829625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.829655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.830081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.830111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.830490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.830522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.830876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.830905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.109 [2024-11-20 10:04:20.831268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.109 [2024-11-20 10:04:20.831299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.109 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.831658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.831690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 
00:30:50.110 [2024-11-20 10:04:20.831952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.831982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.832330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.832363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.832707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.832738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.833094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.833126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.833469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.833501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.833844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.833876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.834226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.834258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.834639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.834670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.834892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.834921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.835288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.835321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 
00:30:50.110 [2024-11-20 10:04:20.835679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.835708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.836045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.836077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.836420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.836452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.836803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.836834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.837194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.837225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.837577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.837607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.837954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.837986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.838336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.838370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.838716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.838748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.839091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.839123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 
00:30:50.110 [2024-11-20 10:04:20.839491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.839522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.839875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.839906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.840262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.840293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.840648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.840680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.110 [2024-11-20 10:04:20.841050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.110 [2024-11-20 10:04:20.841082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.110 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.841466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.841498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.841838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.841869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.842221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.842251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.842638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.842676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.843046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.843079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 
00:30:50.111 [2024-11-20 10:04:20.843462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.843494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.843855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.843885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.844247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.844279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.844645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.844675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.845033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.845064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.845466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.845499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.845844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.845877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.846232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.846264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.846632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.846664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.847001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.847031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 
00:30:50.111 [2024-11-20 10:04:20.847381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.847414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.847763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.847794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.848142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.848188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.848538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.848567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.848918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.848948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.849318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.111 [2024-11-20 10:04:20.849349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.111 qpair failed and we were unable to recover it. 00:30:50.111 [2024-11-20 10:04:20.849699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.849729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.850086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.850118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.850514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.850546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.850893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.850922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 
00:30:50.112 [2024-11-20 10:04:20.851277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.851308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.851667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.851696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.852057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.852089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.852439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.852470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.852819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.852850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.853212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.853244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.853602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.853632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.853973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.854002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.854367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.854399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.854819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.854849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 
00:30:50.112 [2024-11-20 10:04:20.855181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.855212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.855448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.855478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.855823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.855856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.856085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.856114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.856464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.856498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.856848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.856878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.857248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.857282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.857680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.857710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.857949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.857985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.858375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.858407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 
00:30:50.112 [2024-11-20 10:04:20.858747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.858778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.112 qpair failed and we were unable to recover it. 00:30:50.112 [2024-11-20 10:04:20.859132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.112 [2024-11-20 10:04:20.859171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.113 qpair failed and we were unable to recover it. 00:30:50.113 [2024-11-20 10:04:20.859524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.113 [2024-11-20 10:04:20.859555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.113 qpair failed and we were unable to recover it. 00:30:50.113 [2024-11-20 10:04:20.859905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.113 [2024-11-20 10:04:20.859934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.113 qpair failed and we were unable to recover it. 00:30:50.113 [2024-11-20 10:04:20.860294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.113 [2024-11-20 10:04:20.860326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.113 qpair failed and we were unable to recover it. 00:30:50.113 [2024-11-20 10:04:20.860683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.113 [2024-11-20 10:04:20.860712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.113 qpair failed and we were unable to recover it. 00:30:50.113 [2024-11-20 10:04:20.861063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.113 [2024-11-20 10:04:20.861095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.113 qpair failed and we were unable to recover it. 00:30:50.113 [2024-11-20 10:04:20.861458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.113 [2024-11-20 10:04:20.861488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.113 qpair failed and we were unable to recover it. 00:30:50.113 [2024-11-20 10:04:20.861842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.113 [2024-11-20 10:04:20.861874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.113 qpair failed and we were unable to recover it. 00:30:50.113 [2024-11-20 10:04:20.862231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.113 [2024-11-20 10:04:20.862262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.113 qpair failed and we were unable to recover it. 
00:30:50.113 [2024-11-20 10:04:20.862488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.113 [2024-11-20 10:04:20.862517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.113 qpair failed and we were unable to recover it.
00:30:50.113 [... the same error triple (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim for every subsequent connection attempt, timestamps 10:04:20.862858 through 10:04:20.941938 ...]
00:30:50.121 [2024-11-20 10:04:20.942194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.121 [2024-11-20 10:04:20.942224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.121 qpair failed and we were unable to recover it. 00:30:50.121 [2024-11-20 10:04:20.942571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.121 [2024-11-20 10:04:20.942600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.121 qpair failed and we were unable to recover it. 00:30:50.121 [2024-11-20 10:04:20.942953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.121 [2024-11-20 10:04:20.942982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.121 qpair failed and we were unable to recover it. 00:30:50.121 [2024-11-20 10:04:20.943378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.121 [2024-11-20 10:04:20.943409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.121 qpair failed and we were unable to recover it. 00:30:50.121 [2024-11-20 10:04:20.943750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.121 [2024-11-20 10:04:20.943780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.121 qpair failed and we were unable to recover it. 00:30:50.121 [2024-11-20 10:04:20.944136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.121 [2024-11-20 10:04:20.944191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.121 qpair failed and we were unable to recover it. 00:30:50.121 [2024-11-20 10:04:20.944432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.121 [2024-11-20 10:04:20.944464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.121 qpair failed and we were unable to recover it. 00:30:50.121 [2024-11-20 10:04:20.944797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.121 [2024-11-20 10:04:20.944826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.121 qpair failed and we were unable to recover it. 00:30:50.121 [2024-11-20 10:04:20.945045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.121 [2024-11-20 10:04:20.945074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.121 qpair failed and we were unable to recover it. 00:30:50.121 [2024-11-20 10:04:20.945413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.121 [2024-11-20 10:04:20.945445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.121 qpair failed and we were unable to recover it. 
00:30:50.121 [2024-11-20 10:04:20.945796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.121 [2024-11-20 10:04:20.945826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.121 qpair failed and we were unable to recover it. 00:30:50.121 [2024-11-20 10:04:20.946196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.121 [2024-11-20 10:04:20.946226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.121 qpair failed and we were unable to recover it. 00:30:50.121 [2024-11-20 10:04:20.946594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.121 [2024-11-20 10:04:20.946624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.121 qpair failed and we were unable to recover it. 00:30:50.121 [2024-11-20 10:04:20.946970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.947000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.947258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.947288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.947705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.947735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.948105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.948136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.948403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.948438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.948768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.948806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.949156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.949200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 
00:30:50.122 [2024-11-20 10:04:20.949584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.949613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.949846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.949878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.950214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.950246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.950596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.950626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.950875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.950905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.951191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.951221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.951602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.951632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.951992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.952022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.952386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.952418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.952792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.952822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 
00:30:50.122 [2024-11-20 10:04:20.953179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.953214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.953442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.953472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.953828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.953857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.954205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.954237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.954655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.954684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.955024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.955054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.955300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.955330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.955568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.955597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.955941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.955970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.956247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.956277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 
00:30:50.122 [2024-11-20 10:04:20.956662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.956691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.122 [2024-11-20 10:04:20.957036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.122 [2024-11-20 10:04:20.957065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.122 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.957425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.957457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.957702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.957732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.958060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.958089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.958416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.958447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.958850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.958879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.959223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.959254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.959464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.959493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.959868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.959898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 
00:30:50.123 [2024-11-20 10:04:20.960113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.960146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.960400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.960431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.960635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.960665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.961024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.961053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.961299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.961331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.961678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.961708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.961935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.961968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.962357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.962390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.962727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.962764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.963102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.963132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 
00:30:50.123 [2024-11-20 10:04:20.963515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.963545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.963905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.963935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.964292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.964322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.964573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.964607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.964858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.964887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.965255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.965285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.965631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.965661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.966017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.966046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.966440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.966471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.123 qpair failed and we were unable to recover it. 00:30:50.123 [2024-11-20 10:04:20.966809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.123 [2024-11-20 10:04:20.966838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 
00:30:50.124 [2024-11-20 10:04:20.967200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.967231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.967479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.967509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.967871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.967900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.968144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.968192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.968544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.968574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.968923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.968953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.969292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.969324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.969676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.969705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.970047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.970076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.970418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.970448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 
00:30:50.124 [2024-11-20 10:04:20.970576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.970607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.970976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.971005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.971255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.971286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.971630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.971660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.972021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.972051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.972536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.972574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.972909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.972940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.973302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.973332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.973682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.973712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.974081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.974111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 
00:30:50.124 [2024-11-20 10:04:20.974467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.974499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.974848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.974878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.975242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.975272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.975624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.975653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.975988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.124 [2024-11-20 10:04:20.976018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.124 qpair failed and we were unable to recover it. 00:30:50.124 [2024-11-20 10:04:20.976399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.125 [2024-11-20 10:04:20.976429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.125 qpair failed and we were unable to recover it. 00:30:50.125 [2024-11-20 10:04:20.976789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.125 [2024-11-20 10:04:20.976820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.125 qpair failed and we were unable to recover it. 00:30:50.125 [2024-11-20 10:04:20.977193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.125 [2024-11-20 10:04:20.977224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.125 qpair failed and we were unable to recover it. 00:30:50.125 [2024-11-20 10:04:20.977418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.125 [2024-11-20 10:04:20.977450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.125 qpair failed and we were unable to recover it. 00:30:50.125 [2024-11-20 10:04:20.977790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.125 [2024-11-20 10:04:20.977821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.125 qpair failed and we were unable to recover it. 
00:30:50.125 [2024-11-20 10:04:20.978057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.125 [2024-11-20 10:04:20.978086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.125 qpair failed and we were unable to recover it. 00:30:50.125 [2024-11-20 10:04:20.978455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.125 [2024-11-20 10:04:20.978486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.125 qpair failed and we were unable to recover it. 00:30:50.125 [2024-11-20 10:04:20.978845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.125 [2024-11-20 10:04:20.978874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.125 qpair failed and we were unable to recover it. 00:30:50.125 [2024-11-20 10:04:20.979241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.125 [2024-11-20 10:04:20.979272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.125 qpair failed and we were unable to recover it. 00:30:50.125 [2024-11-20 10:04:20.979613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.125 [2024-11-20 10:04:20.979644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.125 qpair failed and we were unable to recover it. 00:30:50.125 [2024-11-20 10:04:20.980004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.125 [2024-11-20 10:04:20.980034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.125 qpair failed and we were unable to recover it. 00:30:50.125 [2024-11-20 10:04:20.980366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.125 [2024-11-20 10:04:20.980397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.125 qpair failed and we were unable to recover it. 00:30:50.125 [2024-11-20 10:04:20.980751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.125 [2024-11-20 10:04:20.980781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.125 qpair failed and we were unable to recover it. 00:30:50.125 [2024-11-20 10:04:20.981132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.125 [2024-11-20 10:04:20.981173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.125 qpair failed and we were unable to recover it. 00:30:50.125 [2024-11-20 10:04:20.981549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.125 [2024-11-20 10:04:20.981579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.125 qpair failed and we were unable to recover it. 
00:30:50.125 [2024-11-20 10:04:20.981932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.125 [2024-11-20 10:04:20.981961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.125 qpair failed and we were unable to recover it. 00:30:50.125 [2024-11-20 10:04:20.982318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.125 [2024-11-20 10:04:20.982347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.125 qpair failed and we were unable to recover it. 00:30:50.125 [2024-11-20 10:04:20.982698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.125 [2024-11-20 10:04:20.982727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.125 qpair failed and we were unable to recover it. 00:30:50.125 [2024-11-20 10:04:20.983058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.125 [2024-11-20 10:04:20.983087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.455 qpair failed and we were unable to recover it. 00:30:50.455 [2024-11-20 10:04:20.983338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.455 [2024-11-20 10:04:20.983372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.455 qpair failed and we were unable to recover it. 00:30:50.455 [2024-11-20 10:04:20.983800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.455 [2024-11-20 10:04:20.983830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.455 qpair failed and we were unable to recover it. 00:30:50.455 [2024-11-20 10:04:20.984174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.455 [2024-11-20 10:04:20.984208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.455 qpair failed and we were unable to recover it. 00:30:50.455 [2024-11-20 10:04:20.984577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.455 [2024-11-20 10:04:20.984606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.455 qpair failed and we were unable to recover it. 00:30:50.455 [2024-11-20 10:04:20.984968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.455 [2024-11-20 10:04:20.984996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.455 qpair failed and we were unable to recover it. 00:30:50.455 [2024-11-20 10:04:20.985356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.455 [2024-11-20 10:04:20.985386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.455 qpair failed and we were unable to recover it. 
00:30:50.455 [2024-11-20 10:04:20.985721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.455 [2024-11-20 10:04:20.985752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.455 qpair failed and we were unable to recover it. 00:30:50.455 [2024-11-20 10:04:20.986089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.455 [2024-11-20 10:04:20.986118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.455 qpair failed and we were unable to recover it. 00:30:50.455 [2024-11-20 10:04:20.986330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.455 [2024-11-20 10:04:20.986360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.455 qpair failed and we were unable to recover it. 00:30:50.455 [2024-11-20 10:04:20.986728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.455 [2024-11-20 10:04:20.986759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.455 qpair failed and we were unable to recover it. 00:30:50.455 [2024-11-20 10:04:20.987104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.455 [2024-11-20 10:04:20.987134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.455 qpair failed and we were unable to recover it. 00:30:50.455 [2024-11-20 10:04:20.987525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.455 [2024-11-20 10:04:20.987561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.455 qpair failed and we were unable to recover it. 00:30:50.455 [2024-11-20 10:04:20.987910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.455 [2024-11-20 10:04:20.987939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.455 qpair failed and we were unable to recover it. 00:30:50.455 [2024-11-20 10:04:20.989684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.455 [2024-11-20 10:04:20.989738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.455 qpair failed and we were unable to recover it. 00:30:50.455 [2024-11-20 10:04:20.990110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.455 [2024-11-20 10:04:20.990143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.455 qpair failed and we were unable to recover it. 00:30:50.455 [2024-11-20 10:04:20.990528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.455 [2024-11-20 10:04:20.990559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.455 qpair failed and we were unable to recover it. 
00:30:50.455 [2024-11-20 10:04:20.990837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.455 [2024-11-20 10:04:20.990867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.455 qpair failed and we were unable to recover it.
00:30:50.455 [... the same three-line failure sequence repeats continuously, with new timestamps only, from 2024-11-20 10:04:20.990 through 10:04:21.071; duplicate entries condensed here, ending with ...]
00:30:50.461 [2024-11-20 10:04:21.071291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.461 [2024-11-20 10:04:21.071323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.461 qpair failed and we were unable to recover it.
00:30:50.461 [2024-11-20 10:04:21.071549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.461 [2024-11-20 10:04:21.071579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.461 qpair failed and we were unable to recover it. 00:30:50.461 [2024-11-20 10:04:21.071959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.461 [2024-11-20 10:04:21.071988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.461 qpair failed and we were unable to recover it. 00:30:50.461 [2024-11-20 10:04:21.072355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.461 [2024-11-20 10:04:21.072386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.461 qpair failed and we were unable to recover it. 00:30:50.461 [2024-11-20 10:04:21.072719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.461 [2024-11-20 10:04:21.072750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.461 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.073113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.073142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.073503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.073533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.073882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.073913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.074277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.074309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.074676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.074706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.075058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.075088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 
00:30:50.462 [2024-11-20 10:04:21.075257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.075288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.075530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.075563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.075900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.075932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.076276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.076307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.076708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.076737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.077083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.077112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.077465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.077497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.077842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.077873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.078206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.078238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.078595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.078625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 
00:30:50.462 [2024-11-20 10:04:21.078971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.079000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.079328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.079359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.079693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.079723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.080059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.080088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.080352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.080382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.080597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.080631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.080976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.081006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.081372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.081402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.081766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.081796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.082142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.082181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 
00:30:50.462 [2024-11-20 10:04:21.082538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.082568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.082922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.082951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.083304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.083334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.083708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.083737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.084078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.084107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.084447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.084479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.084827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.084856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.085204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.085235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.462 [2024-11-20 10:04:21.085591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.462 [2024-11-20 10:04:21.085621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.462 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.085974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.086003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 
00:30:50.463 [2024-11-20 10:04:21.086363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.086393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.086671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.086700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.087058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.087087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.087435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.087466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.087822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.087851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.088076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.088105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.088483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.088513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.088850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.088880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.089246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.089277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.089627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.089656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 
00:30:50.463 [2024-11-20 10:04:21.090002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.090031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.090381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.090411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.090759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.090789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.091141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.091179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.091528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.091558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.091891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.091921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.092269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.092299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.092644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.092674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.093026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.093056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.093411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.093440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 
00:30:50.463 [2024-11-20 10:04:21.093642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.093672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.094021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.094049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.094413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.094444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.094802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.094832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.095170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.095200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.095542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.095577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.095934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.095964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.096386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.096418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.096648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.096680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.097011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.097042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 
00:30:50.463 [2024-11-20 10:04:21.097427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.097458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.097789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.097820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.098177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.098208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.463 qpair failed and we were unable to recover it. 00:30:50.463 [2024-11-20 10:04:21.098550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.463 [2024-11-20 10:04:21.098580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.098915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.098945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.099295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.099325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.099523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.099552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.099889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.099918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.100214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.100244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.100594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.100624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 
00:30:50.464 [2024-11-20 10:04:21.100958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.100988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.101242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.101272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.101623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.101652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.101987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.102017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.102372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.102404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.102744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.102774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.103121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.103151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.103483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.103514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.103862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.103892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.104237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.104268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 
00:30:50.464 [2024-11-20 10:04:21.104631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.104661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.105023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.105053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.105282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.105316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.105661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.105690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.106041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.106071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.106412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.106443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.106809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.106839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.107177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.107208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.107593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.107622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.107960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.107989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 
00:30:50.464 [2024-11-20 10:04:21.108341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.108370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.108719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.108748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.109086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.109115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.109463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.109493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.109839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.109869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.110223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.110259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.110595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.110624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.110971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.464 [2024-11-20 10:04:21.111001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.464 qpair failed and we were unable to recover it. 00:30:50.464 [2024-11-20 10:04:21.111373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.111403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 00:30:50.465 [2024-11-20 10:04:21.111612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.111641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 
00:30:50.465 [2024-11-20 10:04:21.112005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.112033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 00:30:50.465 [2024-11-20 10:04:21.112394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.112424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 00:30:50.465 [2024-11-20 10:04:21.112773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.112803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 00:30:50.465 [2024-11-20 10:04:21.113136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.113174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 00:30:50.465 [2024-11-20 10:04:21.113506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.113535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 00:30:50.465 [2024-11-20 10:04:21.113881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.113909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 00:30:50.465 [2024-11-20 10:04:21.114264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.114294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 00:30:50.465 [2024-11-20 10:04:21.114654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.114685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 00:30:50.465 [2024-11-20 10:04:21.114917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.114945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 00:30:50.465 [2024-11-20 10:04:21.115292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.115323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 
00:30:50.465 [2024-11-20 10:04:21.115527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.115554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 00:30:50.465 [2024-11-20 10:04:21.115903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.115931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 00:30:50.465 [2024-11-20 10:04:21.116312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.116343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 00:30:50.465 [2024-11-20 10:04:21.116681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.116712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 00:30:50.465 [2024-11-20 10:04:21.117054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.117084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 00:30:50.465 [2024-11-20 10:04:21.117441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.117472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 00:30:50.465 [2024-11-20 10:04:21.117818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.117848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 00:30:50.465 [2024-11-20 10:04:21.118191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.118221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 00:30:50.465 [2024-11-20 10:04:21.118427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.118455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 00:30:50.465 [2024-11-20 10:04:21.118785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.465 [2024-11-20 10:04:21.118814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.465 qpair failed and we were unable to recover it. 
00:30:50.465 [2024-11-20 10:04:21.119173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.465 [2024-11-20 10:04:21.119204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.465 qpair failed and we were unable to recover it.
00:30:50.466 [message group repeated 39 more times through 2024-11-20 10:04:21.133666]
00:30:50.466 [2024-11-20 10:04:21.134006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.466 [2024-11-20 10:04:21.134035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.466 qpair failed and we were unable to recover it.
00:30:50.467 [message group repeated 7 more times through 2024-11-20 10:04:21.136561]
00:30:50.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1564508 Killed "${NVMF_APP[@]}" "$@"
00:30:50.467 [2024-11-20 10:04:21.136929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.467 [2024-11-20 10:04:21.136959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.467 qpair failed and we were unable to recover it.
00:30:50.467 10:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:50.467 10:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:50.467 10:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:50.467 10:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:50.467 10:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:50.467 [2024-11-20 10:04:21.137312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.467 [2024-11-20 10:04:21.137343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.467 qpair failed and we were unable to recover it.
00:30:50.467 [message group repeated 7 more times through 2024-11-20 10:04:21.140059, interleaved with the shell trace above]
00:30:50.467 [2024-11-20 10:04:21.140403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.467 [2024-11-20 10:04:21.140433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.467 qpair failed and we were unable to recover it.
00:30:50.467 [message group repeated 18 more times through 2024-11-20 10:04:21.146892]
00:30:50.467 10:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1565536
00:30:50.467 10:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1565536
00:30:50.468 10:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:50.468 10:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1565536 ']'
00:30:50.468 10:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:50.468 10:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:50.468 10:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:50.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:50.468 10:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:50.468 10:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:50.468 [2024-11-20 10:04:21.147250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.468 [2024-11-20 10:04:21.147281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.468 qpair failed and we were unable to recover it.
00:30:50.468 [message group repeated 5 more times through 2024-11-20 10:04:21.149133, interleaved with the shell trace above]
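[editor's note] Reading the trace above: the test killed the old nvmf_tgt process (pid 1564508) and nvmfappstart relaunches it inside the cvl_0_0_ns_spdk namespace, then waitforlisten polls until the new pid (1565536) answers on /var/tmp/spdk.sock. In the meantime nothing listens on 10.0.0.2:4420, so every host-side connect() is refused; errno 111 on Linux is ECONNREFUSED, which is what posix_sock_create keeps reporting. A minimal sketch of that wait-until-listening pattern in plain bash (the /dev/tcp probe and the retry limits are illustrative, not taken from the SPDK scripts):

#!/usr/bin/env bash
# Illustrative only: poll a TCP endpoint until something accepts connections.
# While the target is down, the connect attempt fails (ECONNREFUSED, errno 111),
# which is exactly what the nvme_tcp reconnect loop in this log keeps hitting.
addr=10.0.0.2 port=4420
for _ in $(seq 1 100); do
    # bash's /dev/tcp pseudo-device performs a real connect() on open
    if (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; then
        echo "target is listening on ${addr}:${port}"
        exit 0
    fi
    sleep 0.1   # connection refused -> listener not up yet; retry
done
echo "timed out waiting for ${addr}:${port}" >&2
exit 1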
00:30:50.468 [2024-11-20 10:04:21.149562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.468 [2024-11-20 10:04:21.149593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.468 qpair failed and we were unable to recover it.
00:30:50.470 [message group repeated 119 more times through 2024-11-20 10:04:21.190733]
00:30:50.471 [2024-11-20 10:04:21.190965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.471 [2024-11-20 10:04:21.190992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.471 qpair failed and we were unable to recover it. 00:30:50.471 [2024-11-20 10:04:21.191331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.471 [2024-11-20 10:04:21.191360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.471 qpair failed and we were unable to recover it. 00:30:50.471 [2024-11-20 10:04:21.191682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.471 [2024-11-20 10:04:21.191712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.471 qpair failed and we were unable to recover it. 00:30:50.471 [2024-11-20 10:04:21.192065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.471 [2024-11-20 10:04:21.192095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.471 qpair failed and we were unable to recover it. 00:30:50.471 [2024-11-20 10:04:21.192433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.471 [2024-11-20 10:04:21.192463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.471 qpair failed and we were unable to recover it. 00:30:50.471 [2024-11-20 10:04:21.192697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.471 [2024-11-20 10:04:21.192726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.471 qpair failed and we were unable to recover it. 00:30:50.471 [2024-11-20 10:04:21.193146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.471 [2024-11-20 10:04:21.193185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.471 qpair failed and we were unable to recover it. 00:30:50.471 [2024-11-20 10:04:21.193462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.471 [2024-11-20 10:04:21.193493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.471 qpair failed and we were unable to recover it. 00:30:50.471 [2024-11-20 10:04:21.193731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.471 [2024-11-20 10:04:21.193760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.471 qpair failed and we were unable to recover it. 00:30:50.471 [2024-11-20 10:04:21.194006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.471 [2024-11-20 10:04:21.194034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.471 qpair failed and we were unable to recover it. 
00:30:50.471 [2024-11-20 10:04:21.194274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.471 [2024-11-20 10:04:21.194305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.471 qpair failed and we were unable to recover it. 00:30:50.471 [2024-11-20 10:04:21.194651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.471 [2024-11-20 10:04:21.194681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.471 qpair failed and we were unable to recover it. 00:30:50.471 [2024-11-20 10:04:21.195046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.471 [2024-11-20 10:04:21.195075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.471 qpair failed and we were unable to recover it. 00:30:50.471 [2024-11-20 10:04:21.195435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.471 [2024-11-20 10:04:21.195466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.471 qpair failed and we were unable to recover it. 00:30:50.471 [2024-11-20 10:04:21.195694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.471 [2024-11-20 10:04:21.195722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.471 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.196067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.196096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.196325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.196357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.196712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.196742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.197011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.197040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.197244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.197275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 
00:30:50.472 [2024-11-20 10:04:21.197626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.197655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.198063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.198093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.198332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.198361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.198710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.198739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.199104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.199133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.199478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.199508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.199748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.199776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.200007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.200034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.200386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.200416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.200770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.200799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 
00:30:50.472 [2024-11-20 10:04:21.201152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.472 [2024-11-20 10:04:21.201192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.472 qpair failed and we were unable to recover it.
00:30:50.472 [2024-11-20 10:04:21.201404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.472 [2024-11-20 10:04:21.201432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.472 qpair failed and we were unable to recover it.
00:30:50.472 [2024-11-20 10:04:21.201794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.472 [2024-11-20 10:04:21.201823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.472 qpair failed and we were unable to recover it.
00:30:50.472 [2024-11-20 10:04:21.202182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.472 [2024-11-20 10:04:21.202214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.472 qpair failed and we were unable to recover it.
00:30:50.472 [2024-11-20 10:04:21.202559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.472 [2024-11-20 10:04:21.202587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.472 qpair failed and we were unable to recover it.
00:30:50.472 [2024-11-20 10:04:21.202942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.472 [2024-11-20 10:04:21.202971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.472 qpair failed and we were unable to recover it.
00:30:50.472 [2024-11-20 10:04:21.203327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.472 [2024-11-20 10:04:21.203358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.472 qpair failed and we were unable to recover it.
00:30:50.472 [2024-11-20 10:04:21.203721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.472 [2024-11-20 10:04:21.203750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.472 qpair failed and we were unable to recover it.
00:30:50.472 [2024-11-20 10:04:21.203904] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:30:50.472 [2024-11-20 10:04:21.203951] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:50.472 [2024-11-20 10:04:21.203945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.472 [2024-11-20 10:04:21.203973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.472 qpair failed and we were unable to recover it.
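The "DPDK EAL parameters" line records the flags the nvmf target hands to DPDK at startup: -c 0xF0 runs reactors on cores 4-7, --file-prefix=spdk0 isolates this process's hugepage files, --proc-type=auto lets EAL choose primary/secondary mode, and --base-virtaddr pins the hugepage mappings at a fixed virtual address so secondary processes can map them identically. A hedged sketch of the equivalent direct EAL call, with the argument list copied from the log (SPDK's env_dpdk layer builds such an argv internally; this is illustrative, not the test's actual launch path):

    /* Illustrative only: hand the EAL arguments from the log straight
     * to rte_eal_init(), as SPDK's env layer does internally. */
    #include <rte_eal.h>
    #include <stdio.h>

    int main(void)
    {
        char *eal_argv[] = {
            "nvmf",                          /* program name slot */
            "-c", "0xF0",                    /* coremask: cores 4-7 */
            "--no-telemetry",
            "--log-level=lib.eal:6",
            "--log-level=lib.cryptodev:5",
            "--log-level=lib.power:5",
            "--log-level=user1:6",
            "--base-virtaddr=0x200000000000",
            "--match-allocations",
            "--file-prefix=spdk0",
            "--proc-type=auto",
        };
        int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }
        return rte_eal_cleanup();
    }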
00:30:50.472 [2024-11-20 10:04:21.204327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.204356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.204687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.204715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.205090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.205119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.205353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.205385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.205609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.205639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.205823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.205852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.206061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.206091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.206437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.206468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.206684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.206714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.207059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.207087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 
00:30:50.472 [2024-11-20 10:04:21.207471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.207503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.207864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.207895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.208256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.208288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.208500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.472 [2024-11-20 10:04:21.208530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.472 qpair failed and we were unable to recover it. 00:30:50.472 [2024-11-20 10:04:21.208798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.208831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.209039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.209069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.209419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.209450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.209787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.209821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.210263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.210294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.210649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.210684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 
00:30:50.473 [2024-11-20 10:04:21.211041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.211070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.211310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.211340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.211739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.211768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.212097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.212125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.212506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.212537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.212862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.212890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.213115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.213144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.213255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.213282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.213557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.213584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.213931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.213959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 
00:30:50.473 [2024-11-20 10:04:21.214338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.214367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.214712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.214739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.215100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.215129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.215369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.215398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.215649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.215678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.215874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.215903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.216255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.216284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.216613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.216643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.216995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.217024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 00:30:50.473 [2024-11-20 10:04:21.217416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.473 [2024-11-20 10:04:21.217446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:50.473 qpair failed and we were unable to recover it. 
00:30:50.473 [2024-11-20 10:04:21.217703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.473 [2024-11-20 10:04:21.217730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.473 qpair failed and we were unable to recover it.
00:30:50.473 [2024-11-20 10:04:21.218002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.473 [2024-11-20 10:04:21.218031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.473 qpair failed and we were unable to recover it.
00:30:50.473 [2024-11-20 10:04:21.218379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.473 [2024-11-20 10:04:21.218411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.473 qpair failed and we were unable to recover it.
00:30:50.473 [2024-11-20 10:04:21.218795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.473 [2024-11-20 10:04:21.218822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.473 qpair failed and we were unable to recover it.
00:30:50.473 [2024-11-20 10:04:21.218922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.473 [2024-11-20 10:04:21.218949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:50.473 qpair failed and we were unable to recover it.
00:30:50.473 [2024-11-20 10:04:21.219001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x936e00 (9): Bad file descriptor
00:30:50.473 Read completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Read completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Read completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Read completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Read completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Read completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Read completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Read completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Read completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Read completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Read completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Read completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Read completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Read completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Write completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Write completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Read completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Write completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Read completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Read completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Read completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Read completed with error (sct=0, sc=8)
00:30:50.473 starting I/O failed
00:30:50.473 Write completed with error (sct=0, sc=8)
00:30:50.474 starting I/O failed
00:30:50.474 Read completed with error (sct=0, sc=8)
00:30:50.474 starting I/O failed
00:30:50.474 Write completed with error (sct=0, sc=8)
00:30:50.474 starting I/O failed
00:30:50.474 Read completed with error (sct=0, sc=8)
00:30:50.474 starting I/O failed
00:30:50.474 Read completed with error (sct=0, sc=8)
00:30:50.474 starting I/O failed
00:30:50.474 Write completed with error (sct=0, sc=8)
00:30:50.474 starting I/O failed
00:30:50.474 Write completed with error (sct=0, sc=8)
00:30:50.474 starting I/O failed
00:30:50.474 Read completed with error (sct=0, sc=8)
00:30:50.474 starting I/O failed
00:30:50.474 Read completed with error (sct=0, sc=8)
00:30:50.474 starting I/O failed
00:30:50.474 Write completed with error (sct=0, sc=8)
00:30:50.474 starting I/O failed
00:30:50.474 [2024-11-20 10:04:21.219967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.474 [2024-11-20 10:04:21.220505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.474 [2024-11-20 10:04:21.220610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420
00:30:50.474 qpair failed and we were unable to recover it.
00:30:50.474 [2024-11-20 10:04:21.221058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.474 [2024-11-20 10:04:21.221094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420
00:30:50.474 qpair failed and we were unable to recover it.
00:30:50.474 [2024-11-20 10:04:21.221533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.474 [2024-11-20 10:04:21.221622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420
00:30:50.474 qpair failed and we were unable to recover it.
00:30:50.474 [2024-11-20 10:04:21.221912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.474 [2024-11-20 10:04:21.221951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420
00:30:50.474 qpair failed and we were unable to recover it.
00:30:50.474 [2024-11-20 10:04:21.222292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.474 [2024-11-20 10:04:21.222324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420
00:30:50.474 qpair failed and we were unable to recover it.
00:30:50.474 [2024-11-20 10:04:21.222637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.474 [2024-11-20 10:04:21.222665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420
00:30:50.474 qpair failed and we were unable to recover it.
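On the Read/Write completions above, sct=0, sc=8 decodes per the NVMe generic status codes as "command aborted due to SQ deletion": once the transport flagged the qpair dead (the -6 is -ENXIO, "No such device or address", as the log itself notes), every outstanding I/O on that qpair was completed in error. A sketch of how an SPDK completion callback can inspect those fields; the callback name and its context are hypothetical, while spdk_nvme_cpl_is_error() and the status bitfields come from SPDK's public headers:

    /* Illustrative completion callback: decode sct/sc the way this log
     * prints them. io_done() is a made-up name; a real caller would pass
     * it to e.g. spdk_nvme_ns_cmd_read() as the completion callback. */
    #include "spdk/nvme.h"
    #include <stdio.h>

    static void
    io_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            /* For the entries above: sct=0 (SPDK_NVME_SCT_GENERIC),
             * sc=8 (SPDK_NVME_SC_ABORTED_SQ_DELETION). */
            fprintf(stderr, "completed with error (sct=%d, sc=%d)\n",
                    cpl->status.sct, cpl->status.sc);
            return;
        }
        /* success path */
    }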
00:30:50.474 [2024-11-20 10:04:21.222982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.223011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.223308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.223341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.223739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.223769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.224124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.224153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.224541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.224570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.224916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.224944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.225205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.225240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.225380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.225419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.225764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.225792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.225895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.225921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 
00:30:50.474 [2024-11-20 10:04:21.226239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.226269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.226508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.226536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.226872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.226901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.227257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.227286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.227685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.227721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.227962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.227990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.228330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.228361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.228690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.228719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.228924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.228954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.229375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.229405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 
00:30:50.474 [2024-11-20 10:04:21.229647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.229675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.230036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.230064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.474 [2024-11-20 10:04:21.230285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.474 [2024-11-20 10:04:21.230313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.474 qpair failed and we were unable to recover it. 00:30:50.475 [2024-11-20 10:04:21.230665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.475 [2024-11-20 10:04:21.230693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.475 qpair failed and we were unable to recover it. 00:30:50.475 [2024-11-20 10:04:21.231053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.475 [2024-11-20 10:04:21.231081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.475 qpair failed and we were unable to recover it. 00:30:50.475 [2024-11-20 10:04:21.231461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.475 [2024-11-20 10:04:21.231490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.475 qpair failed and we were unable to recover it. 00:30:50.475 [2024-11-20 10:04:21.231837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.475 [2024-11-20 10:04:21.231865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.475 qpair failed and we were unable to recover it. 00:30:50.475 [2024-11-20 10:04:21.232088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.475 [2024-11-20 10:04:21.232115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.475 qpair failed and we were unable to recover it. 00:30:50.475 [2024-11-20 10:04:21.232495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.475 [2024-11-20 10:04:21.232527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.475 qpair failed and we were unable to recover it. 00:30:50.475 [2024-11-20 10:04:21.232852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.475 [2024-11-20 10:04:21.232881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.475 qpair failed and we were unable to recover it. 
00:30:50.475 [2024-11-20 10:04:21.233174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.475 [2024-11-20 10:04:21.233204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420
00:30:50.475 qpair failed and we were unable to recover it.
00:30:50.475 [2024-11-20 10:04:21.233560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.475 [2024-11-20 10:04:21.233588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420
00:30:50.475 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats 198 more times between 10:04:21.233952 and 10:04:21.302481; only the timestamps differ ...]
00:30:50.480 [... connect() failed (errno = 111) / qpair failed retries continue from 10:04:21.302 through 10:04:21.304 ...]
00:30:50.480 [2024-11-20 10:04:21.304922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
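Taken together with the reactor notices further down (cores 4-7), the four available cores point at a four-core reactor mask. A hypothetical invocation that would produce this layout, as a sketch only (the binary path and the actual command line are assumptions; neither is shown in this excerpt):

  $ ./build/bin/nvmf_tgt -m 0xF0    # -m is the SPDK reactor core mask; 0xF0 selects cores 4, 5, 6, 7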
00:30:50.480 [... connect() failed (errno = 111) / qpair failed retries continue from 10:04:21.304 through 10:04:21.338 ...]
00:30:50.482 [... connect() failed (errno = 111) / qpair failed retries continue from 10:04:21.338 through 10:04:21.340 ...]
00:30:50.482 [2024-11-20 10:04:21.339953] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:50.482 [2024-11-20 10:04:21.339983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:50.482 [2024-11-20 10:04:21.339991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:50.482 [2024-11-20 10:04:21.339998] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:50.482 [2024-11-20 10:04:21.340004] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
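The five notices above are the app's own how-to for capturing its trace buffer. A minimal sketch of that flow, assuming the spdk_trace tool was built alongside the target (the build/bin path is an assumption; the command arguments and the /dev/shm/nvmf_trace.0 file come verbatim from the notices):

  $ ./build/bin/spdk_trace -s nvmf -i 0    # snapshot events from the running nvmf app, trace instance 0
  $ cp /dev/shm/nvmf_trace.0 /tmp/         # or keep the shm file itself for offline analysis/debug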
00:30:50.482 [2024-11-20 10:04:21.341659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:30:50.482 [2024-11-20 10:04:21.341779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:30:50.482 [2024-11-20 10:04:21.341792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:30:50.482 [2024-11-20 10:04:21.341799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:30:50.482 [... connect() failed (errno = 111) / sock connection error / qpair failed retries continue from 10:04:21.342 through 10:04:21.364 ...]
00:30:50.762 [2024-11-20 10:04:21.364249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.762 [2024-11-20 10:04:21.364282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420
00:30:50.762 qpair failed and we were unable to recover it.
00:30:50.762 [2024-11-20 10:04:21.364507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.762 [2024-11-20 10:04:21.364536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.762 qpair failed and we were unable to recover it. 00:30:50.762 [2024-11-20 10:04:21.364885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.762 [2024-11-20 10:04:21.364913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.762 qpair failed and we were unable to recover it. 00:30:50.762 [2024-11-20 10:04:21.365146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.365185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.365530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.365559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.365909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.365937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.366326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.366357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.366564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.366591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.366935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.366965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.367180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.367209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.367569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.367598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 
00:30:50.763 [2024-11-20 10:04:21.367949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.367977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.368331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.368360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.368580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.368609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.368948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.368978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.369333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.369362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.369569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.369600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.369951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.369979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.370326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.370355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.370702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.370738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.371051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.371086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 
00:30:50.763 [2024-11-20 10:04:21.371428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.371457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.371826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.371856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.372199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.372231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.372548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.372577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.372896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.372925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.373300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.373330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.373705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.373735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.374006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.374039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.374293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.374325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.374604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.374631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 
00:30:50.763 [2024-11-20 10:04:21.374980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.375009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.375335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.375366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.375671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.375699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.376045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.376074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.376436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.376467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.376797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.376826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.377059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.377087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.377314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.377348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.377582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.377611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 00:30:50.763 [2024-11-20 10:04:21.377954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.763 [2024-11-20 10:04:21.377983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.763 qpair failed and we were unable to recover it. 
00:30:50.764 [2024-11-20 10:04:21.378318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.378348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.378711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.378740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.379086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.379115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.379248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.379277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.379653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.379682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.380014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.380043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.380386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.380414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.380743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.380772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.381123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.381152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.381518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.381547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 
00:30:50.764 [2024-11-20 10:04:21.381989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.382018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.382383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.382413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.382759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.382786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.383127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.383155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.383509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.383538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.383761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.383789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.384059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.384087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.384429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.384459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.384805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.384840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.385187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.385217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 
00:30:50.764 [2024-11-20 10:04:21.385569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.385598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.385810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.385837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.386169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.386198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.386535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.386566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.386789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.386820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.387030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.387059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.387402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.387432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.387722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.387750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.388115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.388143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.388511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.388540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 
00:30:50.764 [2024-11-20 10:04:21.388903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.388931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.389328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.389357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.389728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.389756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.390100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.390128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.390515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.390545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.390903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.390931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.391300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.391330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.764 qpair failed and we were unable to recover it. 00:30:50.764 [2024-11-20 10:04:21.391655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.764 [2024-11-20 10:04:21.391684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.392064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.392091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.392462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.392491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 
00:30:50.765 [2024-11-20 10:04:21.392687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.392714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.393101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.393129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.393492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.393521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.393739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.393767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.394113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.394142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.394472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.394502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.394833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.394861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.395209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.395238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.395542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.395569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.395915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.395943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 
00:30:50.765 [2024-11-20 10:04:21.396301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.396330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.396536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.396563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.396780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.396813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.397148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.397185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.397491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.397520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.397867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.397895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.398104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.398131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.398367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.398396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.398727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.398762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.399097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.399126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 
00:30:50.765 [2024-11-20 10:04:21.399384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.399414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.399789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.399817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.399913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.399941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.400261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.400291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.400530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.400558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.400911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.400940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.401331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.401359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.401598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.401626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.401985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.402014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.402420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.402449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 
00:30:50.765 [2024-11-20 10:04:21.402787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.402814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.403155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.403192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.403420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.403447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.403770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.403798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.404073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.765 [2024-11-20 10:04:21.404100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.765 qpair failed and we were unable to recover it. 00:30:50.765 [2024-11-20 10:04:21.404345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.766 [2024-11-20 10:04:21.404376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.766 qpair failed and we were unable to recover it. 00:30:50.766 [2024-11-20 10:04:21.404584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.766 [2024-11-20 10:04:21.404612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.766 qpair failed and we were unable to recover it. 00:30:50.766 [2024-11-20 10:04:21.404965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.766 [2024-11-20 10:04:21.404993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.766 qpair failed and we were unable to recover it. 00:30:50.766 [2024-11-20 10:04:21.405306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.766 [2024-11-20 10:04:21.405336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.766 qpair failed and we were unable to recover it. 00:30:50.766 [2024-11-20 10:04:21.405712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.766 [2024-11-20 10:04:21.405740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.766 qpair failed and we were unable to recover it. 
00:30:50.766 [2024-11-20 10:04:21.406090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.766 [2024-11-20 10:04:21.406118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.766 qpair failed and we were unable to recover it. 00:30:50.766 [2024-11-20 10:04:21.406543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.766 [2024-11-20 10:04:21.406572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.766 qpair failed and we were unable to recover it. 00:30:50.766 [2024-11-20 10:04:21.406946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.766 [2024-11-20 10:04:21.406975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.766 qpair failed and we were unable to recover it. 00:30:50.766 [2024-11-20 10:04:21.407334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.766 [2024-11-20 10:04:21.407364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.766 qpair failed and we were unable to recover it. 00:30:50.766 [2024-11-20 10:04:21.407723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.766 [2024-11-20 10:04:21.407752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.766 qpair failed and we were unable to recover it. 00:30:50.766 [2024-11-20 10:04:21.408097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.766 [2024-11-20 10:04:21.408126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.766 qpair failed and we were unable to recover it. 00:30:50.766 [2024-11-20 10:04:21.408567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.766 [2024-11-20 10:04:21.408598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.766 qpair failed and we were unable to recover it. 00:30:50.766 [2024-11-20 10:04:21.408952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.766 [2024-11-20 10:04:21.408981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.766 qpair failed and we were unable to recover it. 00:30:50.766 [2024-11-20 10:04:21.409089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.766 [2024-11-20 10:04:21.409116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.766 qpair failed and we were unable to recover it. 
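errno = 111 is ECONNREFUSED on Linux: nothing is accepting connections at 10.0.0.2:4420 (the NVMe/TCP well-known port) while the test reworks the network, so every reconnect attempt on this qpair fails immediately. A minimal, SPDK-independent C sketch of the same failure mode (the address and port are copied from the log; this is illustrative, not test code):

    /* connect_refused.c -- reproduce "connect() failed, errno = 111".
     * Assumes a reachable host with no listener bound to the port. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in sa = { 0 };
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* With no listener, this prints: errno = 111 (Connection refused),
             * the same errno posix_sock_create reports above. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

A host that is reachable but has nothing bound to the port refuses at once with errno 111; an unreachable host would instead time out or yield EHOSTUNREACH, which is why the log shows the refusal immediately and repeatedly.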
00:30:50.766 Read completed with error (sct=0, sc=8)
00:30:50.766 starting I/O failed
[... the "completed with error (sct=0, sc=8)" / "starting I/O failed" pair repeats for all 32 outstanding I/Os on the qpair: 21 reads and 11 writes ...]
00:30:50.766 [2024-11-20 10:04:21.409904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:50.766 [2024-11-20 10:04:21.410450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.766 [2024-11-20 10:04:21.410542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420
00:30:50.766 qpair failed and we were unable to recover it.
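This burst is fallout from the qpair dying rather than a new failure: the commands still queued on qpair 3 complete with status (sct=0, sc=8), and the transport then reports CQ error -6, i.e. -ENXIO, matching the "No such device or address" text. Per the NVMe base specification, status code type 0 is Generic Command Status, and status code 0x08 in that set is "Command Aborted due to SQ Deletion" -- the status a host driver applies to I/O it aborts while tearing down a failed submission queue. A small illustrative decoder (a hypothetical helper, not an SPDK API):

    /* decode_status.c -- map the (sct, sc) pair printed above to its
     * NVMe-spec meaning. Only the codes seen in this log are handled. */
    #include <stdio.h>

    static const char *nvme_status_str(unsigned sct, unsigned sc)
    {
        if (sct != 0)
            return "non-generic status code type (see NVMe spec)";
        switch (sc) {
        case 0x0: return "Successful Completion";
        case 0x8: return "Command Aborted due to SQ Deletion";
        default:  return "other Generic Command Status";
        }
    }

    int main(void)
    {
        /* All 32 failed completions above carry (sct=0, sc=8). */
        printf("sct=0, sc=8 -> %s\n", nvme_status_str(0, 0x8));
        return 0;
    }

Note also that the connect() failure logged right after the CQ error targets a different tqpair pointer (0x9410c0) than the long retry loop, before the retries against 0x7f389c000b90 resume below.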
00:30:50.766 [2024-11-20 10:04:21.410783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.766 [2024-11-20 10:04:21.410813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420
00:30:50.766 qpair failed and we were unable to recover it.
[... the identical connect()/qpair-failure triplet for tqpair=0x7f389c000b90 resumes and repeats from 10:04:21.411149 through 10:04:21.420697 ...]
00:30:50.767 [2024-11-20 10:04:21.420963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.767 [2024-11-20 10:04:21.420990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420
00:30:50.767 qpair failed and we were unable to recover it.
00:30:50.767 [2024-11-20 10:04:21.421331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.767 [2024-11-20 10:04:21.421360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.767 qpair failed and we were unable to recover it. 00:30:50.767 [2024-11-20 10:04:21.421704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.767 [2024-11-20 10:04:21.421732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.767 qpair failed and we were unable to recover it. 00:30:50.767 [2024-11-20 10:04:21.421989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.767 [2024-11-20 10:04:21.422016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.767 qpair failed and we were unable to recover it. 00:30:50.767 [2024-11-20 10:04:21.422213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.767 [2024-11-20 10:04:21.422242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.767 qpair failed and we were unable to recover it. 00:30:50.767 [2024-11-20 10:04:21.422577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.767 [2024-11-20 10:04:21.422605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.767 qpair failed and we were unable to recover it. 00:30:50.767 [2024-11-20 10:04:21.422819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.767 [2024-11-20 10:04:21.422847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.767 qpair failed and we were unable to recover it. 00:30:50.767 [2024-11-20 10:04:21.423223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.767 [2024-11-20 10:04:21.423253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.767 qpair failed and we were unable to recover it. 00:30:50.767 [2024-11-20 10:04:21.423473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.767 [2024-11-20 10:04:21.423500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.767 qpair failed and we were unable to recover it. 00:30:50.767 [2024-11-20 10:04:21.423802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.767 [2024-11-20 10:04:21.423831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.767 qpair failed and we were unable to recover it. 00:30:50.767 [2024-11-20 10:04:21.424188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.767 [2024-11-20 10:04:21.424217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.767 qpair failed and we were unable to recover it. 
00:30:50.767 [2024-11-20 10:04:21.424533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.767 [2024-11-20 10:04:21.424560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.767 qpair failed and we were unable to recover it. 00:30:50.767 [2024-11-20 10:04:21.424739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.767 [2024-11-20 10:04:21.424767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.767 qpair failed and we were unable to recover it. 00:30:50.767 [2024-11-20 10:04:21.425107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.767 [2024-11-20 10:04:21.425136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.767 qpair failed and we were unable to recover it. 00:30:50.767 [2024-11-20 10:04:21.425381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.425410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.425622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.425649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.425843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.425872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.426211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.426241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.426449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.426477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.426688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.426719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.427046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.427074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 
00:30:50.768 [2024-11-20 10:04:21.427416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.427446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.427643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.427672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.427999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.428028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.428249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.428278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.428606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.428635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.428997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.429025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.429432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.429461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.429849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.429877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.430276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.430304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.430658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.430685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 
00:30:50.768 [2024-11-20 10:04:21.431034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.431062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.431407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.431435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.431782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.431810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.432189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.432219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.432540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.432568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.432784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.432813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.433063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.433092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.433434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.433464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.433696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.433731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.434064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.434092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 
00:30:50.768 [2024-11-20 10:04:21.434459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.434490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.434693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.434722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.434921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.434950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.435187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.435217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.435572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.435599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.435944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.435973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.436319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.436347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.436688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.436716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.437071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.437099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.768 [2024-11-20 10:04:21.437452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.437480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 
00:30:50.768 [2024-11-20 10:04:21.437738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.768 [2024-11-20 10:04:21.437766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.768 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.438115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.438143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.438522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.438550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.438946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.438974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.439329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.439360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.439721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.439750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.439973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.440001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.440101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.440130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.440700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.440793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.440976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.441013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 
00:30:50.769 [2024-11-20 10:04:21.441429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.441520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.441822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.441859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.442062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.442092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.442466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.442498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.442826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.442855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.443110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.443140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.443390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.443420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.443697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.443725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.443931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.443959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.444176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.444207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 
00:30:50.769 [2024-11-20 10:04:21.444409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.444437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.444790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.444817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.445166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.445196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.445403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.445431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.445837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.445865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.446076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.446104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.446470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.446501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.446708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.446736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.447104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.447138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.447554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.447584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 
00:30:50.769 [2024-11-20 10:04:21.447825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.447854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.448214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.448246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.448490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.448518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.448878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.448906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.449258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.449288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.449495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.449523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.449751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.449779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.449991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.450020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.769 qpair failed and we were unable to recover it. 00:30:50.769 [2024-11-20 10:04:21.450228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.769 [2024-11-20 10:04:21.450258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.450502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.450530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 
00:30:50.770 [2024-11-20 10:04:21.450888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.450917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.451148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.451185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.451427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.451455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.451751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.451779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.452126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.452154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.452453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.452482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.452839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.452868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.452965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.452994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.453483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.453576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.453963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.454000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 
00:30:50.770 [2024-11-20 10:04:21.454094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.454122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.454413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.454443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.454569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.454604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.454818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.454846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.455220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.455251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.455665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.455695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.456049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.456077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.456205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.456244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.456485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.456514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.456873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.456902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 
00:30:50.770 [2024-11-20 10:04:21.457259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.457288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.457654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.457682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.457896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.457924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.458037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.458067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.458194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.458224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.458575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.458605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.458828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.458856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.459223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.459252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.459592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.459627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.459850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.459879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 
00:30:50.770 [2024-11-20 10:04:21.460289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.460318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.460678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.460706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.460916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.460944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.461291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.461320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.461683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.461711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.462085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.770 [2024-11-20 10:04:21.462114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.770 qpair failed and we were unable to recover it. 00:30:50.770 [2024-11-20 10:04:21.462447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.771 [2024-11-20 10:04:21.462477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.771 qpair failed and we were unable to recover it. 00:30:50.771 [2024-11-20 10:04:21.462861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.771 [2024-11-20 10:04:21.462889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.771 qpair failed and we were unable to recover it. 00:30:50.771 [2024-11-20 10:04:21.463132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.771 [2024-11-20 10:04:21.463168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.771 qpair failed and we were unable to recover it. 00:30:50.771 [2024-11-20 10:04:21.463442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.771 [2024-11-20 10:04:21.463471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.771 qpair failed and we were unable to recover it. 
00:30:50.771 [2024-11-20 10:04:21.463668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.463697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.464053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.464081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.464302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.464333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.464696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.464725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.465055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.465084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.465368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.465398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.465724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.465752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.466099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.466127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.466472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.466502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.466835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.466863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.467097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.467126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.467363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.467394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.467610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.467643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.467851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.467880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.468082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.468111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.468454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.468485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.468831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.468859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.469216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.469245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.469447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.469474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.469845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.469873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.470234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.470264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.470621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.470650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.470981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.471009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.471383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.471413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.471825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.471853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.472198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.472227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.472580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.472614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.472708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.472736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.473017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.771 [2024-11-20 10:04:21.473051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.771 qpair failed and we were unable to recover it.
00:30:50.771 [2024-11-20 10:04:21.473422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.473452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.473669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.473696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.474050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.474077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.474287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.474315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.474627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.474656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.475001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.475029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.475236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.475264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.475638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.475666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.476002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.476030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.476376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.476405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.476611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.476640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.476847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.476875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.477217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.477246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.477605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.477633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.477842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.477869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.478184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.478214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.478537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.478565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.478924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.478952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.479182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.479211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.479564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.479592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.479938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.479965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.480156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.480196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.480538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.480566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.480772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.480800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.481021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.481050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.481385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.481415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.481679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.481712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.482036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.482065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.482427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.482456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.482804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.482832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.483174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.483204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.483543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.483571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.483937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.483966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.484206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.484235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.484606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.484634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.484981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.485009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.485351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.485381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.485602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.772 [2024-11-20 10:04:21.485630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.772 qpair failed and we were unable to recover it.
00:30:50.772 [2024-11-20 10:04:21.485948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.485977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.486202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.486245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.486558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.486587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.486796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.486824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.487102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.487130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.487495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.487525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.487873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.487901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.488247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.488276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.488479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.488507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.488844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.488872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.489207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.489237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.489583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.489618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.489926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.489954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.490173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.490201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.490557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.490585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.490873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.490910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.491263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.491293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.491652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.491681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.492041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.492070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.492444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.492473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.492825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.492853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.493096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.493125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.493330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.493359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.493576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.493605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.493944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.493973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.494299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.494329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.494680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.494708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.495081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.495110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.495446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.495476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.495696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.495724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.496067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.496096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.496299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.496328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.496694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.496723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.497068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.497097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.497418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.497447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.497757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.497785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.498138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.498189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.773 [2024-11-20 10:04:21.498403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.773 [2024-11-20 10:04:21.498431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.773 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.498765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.498794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.499156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.499194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.499392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.499419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.499768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.499804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.500019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.500047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.500246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.500275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.500655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.500684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.501047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.501074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.501274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.501303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.501549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.501582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.501941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.501975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.502331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.502361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.502721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.502749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.503004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.503032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.503381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.503410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.503621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.503649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.503907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.503935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.504272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.504301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.504611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.504640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.504987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.505015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.505356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.505386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.505724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.505752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.506094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.506122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.506500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.506531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.506789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.506820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.507184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.507214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.507533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.507560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.507898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.507927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.508208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.508237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.508582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.508610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.508958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.508987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.509347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.509375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.509710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.509738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.510110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.510138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.510470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.510500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.510847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.510876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.511096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.774 [2024-11-20 10:04:21.511124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.774 qpair failed and we were unable to recover it.
00:30:50.774 [2024-11-20 10:04:21.511502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.511531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.511893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.511921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.512267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.512296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.512657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.512685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.512930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.512957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.513309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.513339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.513598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.513633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.513828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.513857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.514191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.514221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.514450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.514479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.514832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.514860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.515181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.515211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.515554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.515582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.515955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.515983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.516331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.516360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.516450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.516477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.516782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.516810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.517169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.517199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.517508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.517536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.517896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.517923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.518302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.518332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.518571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.518599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.518901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.518930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.519171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.519201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.519410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.519444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.519666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.519694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.520035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.520064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.520426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.520456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.520828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.520856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.521070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.521098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.521316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.521350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.521588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.521616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.521945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.521973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.775 qpair failed and we were unable to recover it.
00:30:50.775 [2024-11-20 10:04:21.522335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.775 [2024-11-20 10:04:21.522366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.522615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.522643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.522994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.523022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.523322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.523352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.523675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.523703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.524031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.524059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.524416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.524445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.524802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.524830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.525182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.525211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.525546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.525574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.526000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.526028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.526385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.526416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.526769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.526797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.527150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.527188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.527409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.527438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.527789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.527817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.528058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.528089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.528319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.528349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.528573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.528601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.528800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.528829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.529200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.529231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.529569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.529598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.529809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.529838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.530210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.530241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.530570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.530599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.530995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.531024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.531387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.531417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.531772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.531800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.532190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.532219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.532544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.532574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.532939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.532968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.533203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.533231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.533652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.533680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.534022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.534050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.534257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.776 [2024-11-20 10:04:21.534287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.776 qpair failed and we were unable to recover it.
00:30:50.776 [2024-11-20 10:04:21.534636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.776 [2024-11-20 10:04:21.534665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.776 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.535026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.535054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.535296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.535325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.535603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.535632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.535987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.536015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.536395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.536431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.536787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.536816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.537172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.537202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.537533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.537560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.537891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.537920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 
00:30:50.777 [2024-11-20 10:04:21.538273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.538303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.538673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.538700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.538921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.538950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.539097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.539124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.539475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.539505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.539816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.539845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.540191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.540220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.540539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.540567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.540976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.541005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.541317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.541347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 
00:30:50.777 [2024-11-20 10:04:21.541567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.541595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.541845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.541873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.542239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.542268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.542523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.542555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.542896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.542924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.543285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.543314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.543691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.543728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.543963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.543991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.544334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.544363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.544724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.544752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 
00:30:50.777 [2024-11-20 10:04:21.545107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.545136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.545499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.545528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.545888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.545917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.546270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.546301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.546685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.546712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.547069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.777 [2024-11-20 10:04:21.547097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.777 qpair failed and we were unable to recover it. 00:30:50.777 [2024-11-20 10:04:21.547449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.547480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.547872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.547900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.548258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.548288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.548500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.548527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 
00:30:50.778 [2024-11-20 10:04:21.548885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.548913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.549275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.549305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.549605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.549632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.549994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.550022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.550258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.550288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.550509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.550543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.550906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.550935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.551336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.551366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.551611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.551639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.551995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.552024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 
00:30:50.778 [2024-11-20 10:04:21.552378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.552408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.552652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.552684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.553032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.553061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.553405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.553434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.553655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.553683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.554027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.554056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.554431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.554461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.554784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.554813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.555145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.555181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 00:30:50.778 [2024-11-20 10:04:21.555528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.778 [2024-11-20 10:04:21.555557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420 00:30:50.778 qpair failed and we were unable to recover it. 
00:30:50.778 [2024-11-20 10:04:21.557003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.778 [2024-11-20 10:04:21.557031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3890000b90 with addr=10.0.0.2, port=4420
00:30:50.778 qpair failed and we were unable to recover it.
00:30:50.778 [2024-11-20 10:04:21.557398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.778 [2024-11-20 10:04:21.557489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420
00:30:50.778 qpair failed and we were unable to recover it.
[... the preceding three messages repeat unchanged for each subsequent connect() retry of tqpair=0x9410c0, the last at 2024-11-20 10:04:21.593121 ...]
00:30:50.781 [2024-11-20 10:04:21.593486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.781 [2024-11-20 10:04:21.593515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.781 qpair failed and we were unable to recover it. 00:30:50.781 [2024-11-20 10:04:21.593870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.781 [2024-11-20 10:04:21.593898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.781 qpair failed and we were unable to recover it. 00:30:50.781 [2024-11-20 10:04:21.594176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.781 [2024-11-20 10:04:21.594206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.781 qpair failed and we were unable to recover it. 00:30:50.781 [2024-11-20 10:04:21.594522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.781 [2024-11-20 10:04:21.594550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.781 qpair failed and we were unable to recover it. 00:30:50.781 [2024-11-20 10:04:21.594787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.781 [2024-11-20 10:04:21.594815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.781 qpair failed and we were unable to recover it. 00:30:50.781 [2024-11-20 10:04:21.595169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.781 [2024-11-20 10:04:21.595199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.781 qpair failed and we were unable to recover it. 00:30:50.781 [2024-11-20 10:04:21.595478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.781 [2024-11-20 10:04:21.595505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.781 qpair failed and we were unable to recover it. 00:30:50.781 [2024-11-20 10:04:21.595852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.781 [2024-11-20 10:04:21.595880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.781 qpair failed and we were unable to recover it. 00:30:50.781 [2024-11-20 10:04:21.596229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.781 [2024-11-20 10:04:21.596259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.781 qpair failed and we were unable to recover it. 00:30:50.781 [2024-11-20 10:04:21.596473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.781 [2024-11-20 10:04:21.596500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.781 qpair failed and we were unable to recover it. 
00:30:50.781 [2024-11-20 10:04:21.596867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.781 [2024-11-20 10:04:21.596895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.781 qpair failed and we were unable to recover it. 00:30:50.781 [2024-11-20 10:04:21.597226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.781 [2024-11-20 10:04:21.597256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.781 qpair failed and we were unable to recover it. 00:30:50.781 [2024-11-20 10:04:21.597604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.597631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.597960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.597988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.598333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.598362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.598721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.598749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.598984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.599013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.599351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.599380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.599683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.599712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.600056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.600084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 
00:30:50.782 [2024-11-20 10:04:21.600440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.600470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.600720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.600747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.600947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.600976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.601279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.601308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.601628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.601658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.601892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.601922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.602253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.602282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.602506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.602533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.602900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.602928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.603278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.603308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 
00:30:50.782 [2024-11-20 10:04:21.603631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.603658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.603888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.603915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.604122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.604149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.604473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.604503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.604874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.604902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.605256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.605286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.605640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.605670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.606056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.606084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.606479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.606508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.606823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.606852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 
00:30:50.782 [2024-11-20 10:04:21.607208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.607237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.607550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.607579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.607953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.607982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.608204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.608233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.608571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.608600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.608896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.608925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.609222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.609252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.609580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.609608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.609946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.609973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 00:30:50.782 [2024-11-20 10:04:21.610322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.782 [2024-11-20 10:04:21.610351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.782 qpair failed and we were unable to recover it. 
00:30:50.783 [2024-11-20 10:04:21.610698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.610725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.611069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.611096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.611446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.611476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.612081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.612118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.612339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.612374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.612621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.612654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.613005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.613036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.613402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.613432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.613639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.613667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.614023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.614052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 
00:30:50.783 [2024-11-20 10:04:21.614395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.614424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.614746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.614774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.614872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.614900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.615144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.615181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.615439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.615473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.615833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.615870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.616068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.616097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.616467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.616498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.616839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.616867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.617074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.617102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 
00:30:50.783 [2024-11-20 10:04:21.617441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.617471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.617690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.617717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.617944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.617972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.618318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.618349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.618694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.618722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.619090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.619126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.619564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.619593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.619928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.619955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.620319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.620348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.620699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.620728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 
00:30:50.783 [2024-11-20 10:04:21.621039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.621067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.621322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.621354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.621716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.621744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.622084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.622111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.622480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.622510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.622855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.622884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.623232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.623261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.623349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.783 [2024-11-20 10:04:21.623376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.783 qpair failed and we were unable to recover it. 00:30:50.783 [2024-11-20 10:04:21.623720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.623748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.623984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.624012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 
00:30:50.784 [2024-11-20 10:04:21.624359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.624390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.624743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.624771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.625145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.625182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.625542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.625571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.625917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.625944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.626181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.626215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.626546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.626574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.626925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.626953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.627318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.627348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.627699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.627727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 
00:30:50.784 [2024-11-20 10:04:21.628098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.628126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.628508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.628537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.628712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.628740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.629014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.629041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.629391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.629421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.629757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.629785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.630015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.630043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.630396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.630426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.630648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.630676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.630898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.630927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 
00:30:50.784 [2024-11-20 10:04:21.631271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.631301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.631645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.631674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.632034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.632062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.632395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.632425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.632797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.632823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.633146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.633182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.633512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.633542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.633651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.633683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.634018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.634046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.634417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.634446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 
00:30:50.784 [2024-11-20 10:04:21.634804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.634832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.635184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.635214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.635552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.635580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.635932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.635959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.636321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.636351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.636587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.636614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.784 qpair failed and we were unable to recover it. 00:30:50.784 [2024-11-20 10:04:21.636917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.784 [2024-11-20 10:04:21.636945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.785 qpair failed and we were unable to recover it. 00:30:50.785 [2024-11-20 10:04:21.637291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.785 [2024-11-20 10:04:21.637320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.785 qpair failed and we were unable to recover it. 00:30:50.785 [2024-11-20 10:04:21.637666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.785 [2024-11-20 10:04:21.637695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.785 qpair failed and we were unable to recover it. 00:30:50.785 [2024-11-20 10:04:21.637887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.785 [2024-11-20 10:04:21.637914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:50.785 qpair failed and we were unable to recover it. 
00:30:50.785 [2024-11-20 10:04:21.638263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.785 [2024-11-20 10:04:21.638293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420
00:30:50.785 qpair failed and we were unable to recover it.
[... the same three-line error (connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it.") repeats continuously: for tqpair=0x9410c0 from 10:04:21.638263 through 10:04:21.693974, then for tqpair=0x7f3894000b90 from 10:04:21.694336 onward, all against addr=10.0.0.2, port=4420 ...]
00:30:51.069 [2024-11-20 10:04:21.707698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.069 [2024-11-20 10:04:21.707728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420
00:30:51.069 qpair failed and we were unable to recover it.
00:30:51.069 [2024-11-20 10:04:21.708074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.069 [2024-11-20 10:04:21.708102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.069 qpair failed and we were unable to recover it. 00:30:51.069 [2024-11-20 10:04:21.708486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.069 [2024-11-20 10:04:21.708515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.069 qpair failed and we were unable to recover it. 00:30:51.069 [2024-11-20 10:04:21.708868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.069 [2024-11-20 10:04:21.708896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.069 qpair failed and we were unable to recover it. 00:30:51.069 [2024-11-20 10:04:21.709238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.069 [2024-11-20 10:04:21.709269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.069 qpair failed and we were unable to recover it. 00:30:51.069 [2024-11-20 10:04:21.709626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.069 [2024-11-20 10:04:21.709655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.069 qpair failed and we were unable to recover it. 00:30:51.069 [2024-11-20 10:04:21.709886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.069 [2024-11-20 10:04:21.709915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.069 qpair failed and we were unable to recover it. 00:30:51.069 [2024-11-20 10:04:21.710304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.069 [2024-11-20 10:04:21.710334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.069 qpair failed and we were unable to recover it. 00:30:51.069 [2024-11-20 10:04:21.710544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.069 [2024-11-20 10:04:21.710571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.069 qpair failed and we were unable to recover it. 00:30:51.069 [2024-11-20 10:04:21.710916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.069 [2024-11-20 10:04:21.710945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.069 qpair failed and we were unable to recover it. 00:30:51.069 [2024-11-20 10:04:21.711193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.069 [2024-11-20 10:04:21.711227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.069 qpair failed and we were unable to recover it. 
00:30:51.069 [2024-11-20 10:04:21.711473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.069 [2024-11-20 10:04:21.711503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.069 qpair failed and we were unable to recover it. 00:30:51.069 [2024-11-20 10:04:21.711839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.069 [2024-11-20 10:04:21.711868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.069 qpair failed and we were unable to recover it. 00:30:51.069 [2024-11-20 10:04:21.712066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.069 [2024-11-20 10:04:21.712095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.069 qpair failed and we were unable to recover it. 00:30:51.069 [2024-11-20 10:04:21.712454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.069 [2024-11-20 10:04:21.712484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.069 qpair failed and we were unable to recover it. 00:30:51.069 [2024-11-20 10:04:21.712733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.069 [2024-11-20 10:04:21.712761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.069 qpair failed and we were unable to recover it. 00:30:51.069 [2024-11-20 10:04:21.712961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.712988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.713365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.713396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.713748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.713776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.714011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.714038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.714403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.714433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 
00:30:51.070 [2024-11-20 10:04:21.714786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.714815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.715174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.715204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.715583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.715612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.715958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.715992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.716334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.716364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.716596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.716624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.716982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.717011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.717237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.717266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.717615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.717644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.717989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.718018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 
00:30:51.070 [2024-11-20 10:04:21.718353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.718382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.718607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.718635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.718990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.719018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.719427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.719457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.719652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.719679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.720047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.720075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.720336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.720365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.720569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.720598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.720717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.720746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.721070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.721099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 
00:30:51.070 [2024-11-20 10:04:21.721446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.721477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.721811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.721839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.722180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.722210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.722535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.722564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.722903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.722930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.723281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.070 [2024-11-20 10:04:21.723311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.070 qpair failed and we were unable to recover it. 00:30:51.070 [2024-11-20 10:04:21.723646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.723675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.723891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.723919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.724236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.724266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.724467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.724495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 
00:30:51.071 [2024-11-20 10:04:21.724870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.724899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.725241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.725271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.725568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.725596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.725931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.725960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.726203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.726232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.726573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.726602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.726952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.726980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.727308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.727338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.727683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.727711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.728060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.728088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 
00:30:51.071 [2024-11-20 10:04:21.728444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.728474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.728829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.728857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.729107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.729135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.729345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.729380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.729714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.729742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.729941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.729969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.730327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.730356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.730712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.730740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.731089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.731117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.731348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.731381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 
00:30:51.071 [2024-11-20 10:04:21.731709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.731739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.732083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.732111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.732329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.732358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.732706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.732734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.733091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.733119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.733374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.733403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.733608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.733635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.733850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.733881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.734236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.734266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.734579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.734608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 
00:30:51.071 [2024-11-20 10:04:21.734822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.734851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.735176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.735206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.735460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.735492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.735877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.071 [2024-11-20 10:04:21.735906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.071 qpair failed and we were unable to recover it. 00:30:51.071 [2024-11-20 10:04:21.736231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.736266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.736579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.736607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.736955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.736983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.737333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.737362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.737716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.737744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.738003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.738031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 
00:30:51.072 [2024-11-20 10:04:21.738299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.738333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.738542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.738570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.738912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.738940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.739293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.739322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.739668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.739696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.739860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.739889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.740276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.740305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.740664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.740692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.741042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.741070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.741426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.741456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 
00:30:51.072 [2024-11-20 10:04:21.741691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.741718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.742098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.742126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.742378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.742412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.742645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.742680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.742893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.742922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.743129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.743165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.743525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.743553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.743909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.743937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.744284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.744313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.744540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.744568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 
00:30:51.072 [2024-11-20 10:04:21.744878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.744906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.745246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.745276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.745616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.745644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.745961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.745989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.746327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.746356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.746703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.746731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.747074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.747102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.747342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.747371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.072 [2024-11-20 10:04:21.747694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.072 [2024-11-20 10:04:21.747722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.072 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.748071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.748100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 
00:30:51.073 [2024-11-20 10:04:21.748460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.748490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.748858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.748887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.749082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.749110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.749455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.749484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.749826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.749855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.750198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.750229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.750540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.750568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.750910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.750938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.751283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.751313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.751616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.751644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 
00:30:51.073 [2024-11-20 10:04:21.751992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.752020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.752381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.752412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.752762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.752790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.753142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.753177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.753583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.753611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.753907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.753936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.754294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.754324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.754677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.754706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.755052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.755080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.755420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.755449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 
00:30:51.073 [2024-11-20 10:04:21.755795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.755824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.756172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.756203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.756546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.756575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.756926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.756954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.757170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.757201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.757395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.757423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.757768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.757796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.758153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.758190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.758419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.758446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.758680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.758708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 
00:30:51.073 [2024-11-20 10:04:21.759096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.759124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.759345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.759375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.759673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.759701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.760033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.760061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.760453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.760483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.073 [2024-11-20 10:04:21.760825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.073 [2024-11-20 10:04:21.760853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.073 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.761175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.761204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.761578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.761607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.761941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.761970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.762331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.762360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 
00:30:51.074 [2024-11-20 10:04:21.762581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.762609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.762953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.762981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.763326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.763355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.763695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.763722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.763916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.763944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.764294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.764323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.764658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.764686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.765036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.765064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.765327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.765356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.765688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.765716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 
00:30:51.074 [2024-11-20 10:04:21.766069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.766102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.766440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.766470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.766822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.766850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.767200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.767229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.767443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.767470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.767595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.767622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.768113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.768221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.768666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.768703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.769091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.769120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.769603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.769692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 
00:30:51.074 [2024-11-20 10:04:21.769968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.770006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.770330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.770363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.770675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.770704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.771076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.771105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.771480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.771511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.771742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.771769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.772097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.772126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.772369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.772398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.772741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.074 [2024-11-20 10:04:21.772769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.074 qpair failed and we were unable to recover it. 00:30:51.074 [2024-11-20 10:04:21.773103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.773134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 
00:30:51.075 [2024-11-20 10:04:21.773538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.773568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.773898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.773927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.774151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.774191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.774559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.774587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.774969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.774998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.775195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.775225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.775542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.775571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.775947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.775982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.776311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.776341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.776700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.776728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 
00:30:51.075 [2024-11-20 10:04:21.777045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.777073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.777442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.777472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.777820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.777850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.778150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.778186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.778501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.778529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.778896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.778924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.779246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.779275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.779589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.779617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.779957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.779984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.780325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.780354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 
00:30:51.075 [2024-11-20 10:04:21.780733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.780761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.781097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.781126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.781458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.781486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.781830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.781858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.782195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.782224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.782427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.782454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.782778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.782806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.783024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.783051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.783302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.783338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.783696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.783725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 
00:30:51.075 [2024-11-20 10:04:21.784052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.784079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.784389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.784419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.784742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.784771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.785140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.785183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.785542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.785584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.785918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.785946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.786138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.786174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.075 [2024-11-20 10:04:21.786511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.075 [2024-11-20 10:04:21.786539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.075 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.786880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.786909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.787280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.787308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 
00:30:51.076 [2024-11-20 10:04:21.787660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.787689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.788033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.788062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.788433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.788468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.788717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.788750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.788998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.789029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.789382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.789413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.789736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.789765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.790038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.790066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.790402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.790432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.790780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.790808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 
00:30:51.076 [2024-11-20 10:04:21.791166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.791196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.791395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.791423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.791782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.791811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.791969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.791996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.792327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.792357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.792703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.792732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.792978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.793010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.793319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.793348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.793620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.793648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.793995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.794023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 
00:30:51.076 [2024-11-20 10:04:21.794371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.794400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.794624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.794652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.795017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.795047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.795388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.795417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.795758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.795786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.795986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.796019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.796451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.796481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.796811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.796838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.797170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.797200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.797518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.797547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 
00:30:51.076 [2024-11-20 10:04:21.797886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.797913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.798242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.798272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.798609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.798636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.798977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.799005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.799357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.799386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.076 qpair failed and we were unable to recover it. 00:30:51.076 [2024-11-20 10:04:21.799735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.076 [2024-11-20 10:04:21.799769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.800120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.800148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.800377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.800405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.800754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.800783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.801167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.801197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 
00:30:51.077 [2024-11-20 10:04:21.801536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.801564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.801913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.801942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.802313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.802343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.802566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.802594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.802950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.802979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.803183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.803212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.803407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.803434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.803860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.803888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.804223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.804252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.804554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.804584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 
00:30:51.077 [2024-11-20 10:04:21.804828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.804856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.805196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.805225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.805574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.805602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.805820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.805847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.806183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.806213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.806571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.806600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.806931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.806958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.807303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.807333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.807546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.807574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.807921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.807948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 
00:30:51.077 [2024-11-20 10:04:21.808319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.808349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.808700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.808728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.809045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.809072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.809429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.809458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.809806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.809835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.810179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.810208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.810542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.810572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.810930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.810958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.811303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.811332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 00:30:51.077 [2024-11-20 10:04:21.811582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.811609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it. 
00:30:51.077 [2024-11-20 10:04:21.811961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.077 [2024-11-20 10:04:21.811989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.077 qpair failed and we were unable to recover it.
[... the same three-line error triplet (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats ~200 more times between 10:04:21.812 and 10:04:21.882, identical except for the timestamps ...]
00:30:51.083 [2024-11-20 10:04:21.882592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.083 [2024-11-20 10:04:21.882626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.083 qpair failed and we were unable to recover it.
00:30:51.083 [2024-11-20 10:04:21.882967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.083 [2024-11-20 10:04:21.883004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.083 qpair failed and we were unable to recover it. 00:30:51.083 [2024-11-20 10:04:21.883370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.083 [2024-11-20 10:04:21.883399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.083 qpair failed and we were unable to recover it. 00:30:51.083 [2024-11-20 10:04:21.883623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.083 [2024-11-20 10:04:21.883650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.083 qpair failed and we were unable to recover it. 00:30:51.083 [2024-11-20 10:04:21.883997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.083 [2024-11-20 10:04:21.884025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.083 qpair failed and we were unable to recover it. 00:30:51.083 [2024-11-20 10:04:21.884397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.083 [2024-11-20 10:04:21.884426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.083 qpair failed and we were unable to recover it. 00:30:51.083 [2024-11-20 10:04:21.884638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.083 [2024-11-20 10:04:21.884670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.083 qpair failed and we were unable to recover it. 00:30:51.083 [2024-11-20 10:04:21.884902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.083 [2024-11-20 10:04:21.884930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.083 qpair failed and we were unable to recover it. 00:30:51.083 [2024-11-20 10:04:21.885261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.083 [2024-11-20 10:04:21.885290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.083 qpair failed and we were unable to recover it. 00:30:51.083 [2024-11-20 10:04:21.885632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.083 [2024-11-20 10:04:21.885661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.083 qpair failed and we were unable to recover it. 00:30:51.083 [2024-11-20 10:04:21.886002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.083 [2024-11-20 10:04:21.886030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.083 qpair failed and we were unable to recover it. 
00:30:51.083 [2024-11-20 10:04:21.886118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.083 [2024-11-20 10:04:21.886145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.083 qpair failed and we were unable to recover it. 00:30:51.083 [2024-11-20 10:04:21.886576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.083 [2024-11-20 10:04:21.886668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.083 qpair failed and we were unable to recover it. 00:30:51.083 [2024-11-20 10:04:21.887053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.887090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.887527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.887618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3894000b90 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.887998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.888030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.888384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.888412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.888633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.888660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.889043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.889071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.889269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.889298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.889648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.889676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 
00:30:51.084 [2024-11-20 10:04:21.890048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.890076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.890411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.890442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.890801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.890829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.891048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.891076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.891429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.891458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.891661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.891688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.892020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.892048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.892407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.892443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.892784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.892812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.893036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.893063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 
00:30:51.084 [2024-11-20 10:04:21.893285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.893318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.893661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.893689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.893980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.894010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.894285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.894315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.894644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.894671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.895001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.895029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.895400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.895430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.895672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.895699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.896062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.896090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.896429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.896458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 
00:30:51.084 [2024-11-20 10:04:21.896803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.896830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.897133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.897178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.897518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.897547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.084 qpair failed and we were unable to recover it. 00:30:51.084 [2024-11-20 10:04:21.897899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.084 [2024-11-20 10:04:21.897927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.898257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.898286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.898643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.898672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.899027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.899055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.899403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.899432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.899784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.899812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.900176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.900205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 
00:30:51.085 [2024-11-20 10:04:21.900520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.900548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.900885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.900913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.901260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.901290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.901704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.901732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.902017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.902045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.902334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.902363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.902586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.902619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.902970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.902998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.903348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.903378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.903690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.903718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 
00:30:51.085 [2024-11-20 10:04:21.904050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.904078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.904515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.904545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.904748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.904775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.905122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.905150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.905499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.905527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.905878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.905906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.906264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.906293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.906625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.906653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.907014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.907049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.907443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.907474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 
00:30:51.085 [2024-11-20 10:04:21.907803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.907831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.908184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.908214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.908469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.908497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.908840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.908868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.909120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.909150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.909372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.909402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.909723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.909751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.910089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.910117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.910426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.910457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.910668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.910696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 
00:30:51.085 [2024-11-20 10:04:21.911021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.911048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.085 qpair failed and we were unable to recover it. 00:30:51.085 [2024-11-20 10:04:21.911401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.085 [2024-11-20 10:04:21.911432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.911785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.911813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.912174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.912203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.912606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.912635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.912851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.912878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.913083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.913111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.913360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.913390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.913721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.913749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.913857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.913890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 
00:30:51.086 [2024-11-20 10:04:21.914266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.914295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.914384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.914412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.914715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.914743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.915086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.915114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.915472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.915502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.915801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.915835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.916179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.916209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.916595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.916623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.916887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.916914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.917262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.917292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 
00:30:51.086 [2024-11-20 10:04:21.917535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.917562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.917901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.917929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.918286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.918315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.918684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.918713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.919070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.919098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.919458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.919488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.919768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.919795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.920146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.920185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.920461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.920489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.920841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.920870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 
00:30:51.086 [2024-11-20 10:04:21.921067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.921095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.921451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.921480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.921842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.921871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.922206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.922236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.922635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.922663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.922909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.922937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.923250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.923282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.923496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.923525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.923747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.086 [2024-11-20 10:04:21.923777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.086 qpair failed and we were unable to recover it. 00:30:51.086 [2024-11-20 10:04:21.923968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.087 [2024-11-20 10:04:21.923996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.087 qpair failed and we were unable to recover it. 
00:30:51.087 [2024-11-20 10:04:21.924322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.087 [2024-11-20 10:04:21.924351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.087 qpair failed and we were unable to recover it. 00:30:51.087 [2024-11-20 10:04:21.924714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.087 [2024-11-20 10:04:21.924743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.087 qpair failed and we were unable to recover it. 00:30:51.087 [2024-11-20 10:04:21.924990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.087 [2024-11-20 10:04:21.925018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.087 qpair failed and we were unable to recover it. 00:30:51.087 [2024-11-20 10:04:21.925237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.087 [2024-11-20 10:04:21.925268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.087 qpair failed and we were unable to recover it. 00:30:51.087 [2024-11-20 10:04:21.925608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.087 [2024-11-20 10:04:21.925639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.087 qpair failed and we were unable to recover it. 00:30:51.087 [2024-11-20 10:04:21.926027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.087 [2024-11-20 10:04:21.926056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.087 qpair failed and we were unable to recover it. 00:30:51.087 [2024-11-20 10:04:21.926305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.087 [2024-11-20 10:04:21.926334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.087 qpair failed and we were unable to recover it. 00:30:51.087 [2024-11-20 10:04:21.926585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.087 [2024-11-20 10:04:21.926614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.087 qpair failed and we were unable to recover it. 00:30:51.087 [2024-11-20 10:04:21.926987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.087 [2024-11-20 10:04:21.927016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.087 qpair failed and we were unable to recover it. 00:30:51.087 [2024-11-20 10:04:21.927247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.087 [2024-11-20 10:04:21.927276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.087 qpair failed and we were unable to recover it. 
00:30:51.087 [2024-11-20 10:04:21.927595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.087 [2024-11-20 10:04:21.927623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420
00:30:51.087 qpair failed and we were unable to recover it.
[... the identical connect() failed, errno = 111 / sock connection error / qpair failed sequence for tqpair=0x9410c0 with addr=10.0.0.2, port=4420 repeats without variation from 10:04:21.927 through 10:04:21.977 ...]
00:30:51.363 [2024-11-20 10:04:21.976970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.976998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 00:30:51.363 [2024-11-20 10:04:21.977112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.977139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9410c0 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 00:30:51.363 [2024-11-20 10:04:21.977662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.977753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 00:30:51.363 [2024-11-20 10:04:21.978131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.978184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 00:30:51.363 [2024-11-20 10:04:21.978542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.978631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 00:30:51.363 [2024-11-20 10:04:21.978951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.978988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 00:30:51.363 [2024-11-20 10:04:21.979427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.979516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 00:30:51.363 [2024-11-20 10:04:21.979906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.979943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 00:30:51.363 [2024-11-20 10:04:21.980459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.980552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 00:30:51.363 [2024-11-20 10:04:21.980988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.981025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 
00:30:51.363 [2024-11-20 10:04:21.981294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.981325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 00:30:51.363 [2024-11-20 10:04:21.981677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.981707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 00:30:51.363 [2024-11-20 10:04:21.981914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.981942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 00:30:51.363 [2024-11-20 10:04:21.982337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.982367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 00:30:51.363 [2024-11-20 10:04:21.982741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.982769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 00:30:51.363 [2024-11-20 10:04:21.983112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.983141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 00:30:51.363 [2024-11-20 10:04:21.983395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.983430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 00:30:51.363 [2024-11-20 10:04:21.983756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.983785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 00:30:51.363 [2024-11-20 10:04:21.984129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.984157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 00:30:51.363 [2024-11-20 10:04:21.984520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.363 [2024-11-20 10:04:21.984560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.363 qpair failed and we were unable to recover it. 
00:30:51.363 [2024-11-20 10:04:21.984802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.363 [2024-11-20 10:04:21.984830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420
00:30:51.363 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats with advancing timestamps through 10:04:21.994592 ...]
00:30:51.364 [2024-11-20 10:04:21.994592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.364 [2024-11-20 10:04:21.994629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420
00:30:51.364 qpair failed and we were unable to recover it.
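errno = 111 is ECONNREFUSED on Linux: each connect() to 10.0.0.2:4420 is being actively refused, which typically means nothing is listening on that port while the target-disconnect scenario runs, so the initiator keeps retrying. The same condition can be probed from a shell without SPDK; a minimal sketch using bash's /dev/tcp pseudo-device (address and port taken from the log above):

    # Attempt one TCP connect() to the NVMe/TCP listener and report the result.
    # A refused connection fails immediately, which is what errno 111 means here.
    if timeout 1 bash -c '</dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "listener up: connect() succeeded"
    else
        echo "connect() failed: refused (ECONNREFUSED, errno 111) or timed out"
    fi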
00:30:51.364 [2024-11-20 10:04:21.994954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.364 [2024-11-20 10:04:21.994983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420
00:30:51.364 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure triplets at 10:04:21.995319 and 10:04:21.995698 ...]
00:30:51.364 10:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
[... identical triplet at 10:04:21.996060 ...]
00:30:51.364 10:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
[... identical triplets at 10:04:21.996422 and 10:04:21.996656 ...]
00:30:51.364 10:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- timing_exit start_nvmf_tgt
10:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- xtrace_disable
[... identical triplet at 10:04:21.997031 ...]
00:30:51.364 10:04:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... identical triplet at 10:04:21.997459 ...]
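The (( i == 0 )) / return 0 pair traced from autotest_common.sh looks like the tail of a bounded wait helper: a counter is decremented while polling until the freshly started nvmf target answers, and i == 0 distinguishes "retries exhausted" from success. A hypothetical reconstruction of that shape (the function name, socket path, and retry budget are illustrative, not SPDK's actual code):

    # Hypothetical poll loop matching the traced (( i == 0 )) / return 0 pattern.
    wait_for_rpc() {
        local sock=${1:-/var/tmp/spdk.sock} i
        for ((i = 60; i > 0; i--)); do
            # rpc_get_methods is a cheap RPC that answers once the app is up.
            scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
            sleep 0.5
        done
        (( i == 0 )) && return 1  # budget exhausted, app never answered
        return 0
    }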
00:30:51.364 [2024-11-20 10:04:21.997856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.364 [2024-11-20 10:04:21.997884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420
00:30:51.364 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats continuously with advancing timestamps through 10:04:22.035840 ...]
00:30:51.367 [2024-11-20 10:04:22.035840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.367 [2024-11-20 10:04:22.035867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420
00:30:51.367 qpair failed and we were unable to recover it.
00:30:51.367 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
[... identical connect()/qpair-failure triplet at 10:04:22.036272 ...]
00:30:51.367 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- rpc_cmd bdev_malloc_create 64 512 -b Malloc0
[... identical triplet at 10:04:22.036646 ...]
00:30:51.367 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- xtrace_disable
[... identical triplet at 10:04:22.036981 ...]
00:30:51.367 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... identical triplets at 10:04:22.037382, 10:04:22.037761, 10:04:22.038127, 10:04:22.038485, and 10:04:22.038852 ...]
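rpc_cmd bdev_malloc_create 64 512 -b Malloc0 asks the target to create a 64 MiB RAM-backed bdev with 512-byte blocks, named Malloc0, which the test then exports over NVMe/TCP. rpc_cmd is the suite's wrapper around SPDK's JSON-RPC client, so a rough standalone equivalent (assuming the default RPC socket path) would be:

    # Create the same malloc bdev by hand against a running SPDK target.
    # 64 = total size in MiB, 512 = block size in bytes, -b = bdev name.
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0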
00:30:51.368 [2024-11-20 10:04:22.039097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.368 [2024-11-20 10:04:22.039125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420
00:30:51.368 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats continuously with advancing timestamps through 10:04:22.055490 ...]
00:30:51.369 [2024-11-20 10:04:22.055490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.369 [2024-11-20 10:04:22.055520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420
00:30:51.369 qpair failed and we were unable to recover it.
00:30:51.369 [2024-11-20 10:04:22.055738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.055766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.056116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.056144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.056399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.056427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.056785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.056813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.057171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.057201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.057553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.057582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.057785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.057813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.058143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.058179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.058536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.058564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.058921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.058948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 
00:30:51.369 [2024-11-20 10:04:22.059317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.059346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.059575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.059603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.059980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.060009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.060364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.060393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.060745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.060773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.061120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.061147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.061484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.061513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.061863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.061891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.062188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.062218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.062566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.062594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 
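Everything above is the host-side NVMe/TCP initiator redialing 10.0.0.2:4420 and being refused at the TCP layer because the target has not opened its listener yet; on Linux, errno 111 is ECONNREFUSED. A quick standalone probe showing the same failure, assuming a shell with nc and python3 available (illustrative only, not part of the test harness):

    # dial the still-closed NVMe/TCP port; nc should report "Connection refused"
    nc -zv 10.0.0.2 4420
    # translate the raw errno from the log into its symbolic name
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused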
00:30:51.369 [2024-11-20 10:04:22.062927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.062955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.063330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.063359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.063693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 Malloc0 00:30:51.369 [2024-11-20 10:04:22.063721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.064039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 [2024-11-20 10:04:22.064067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.369 qpair failed and we were unable to recover it. 00:30:51.369 [2024-11-20 10:04:22.064454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.369 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.370 [2024-11-20 10:04:22.064483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.370 qpair failed and we were unable to recover it. 00:30:51.370 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:51.370 [2024-11-20 10:04:22.064860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.370 [2024-11-20 10:04:22.064888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.370 qpair failed and we were unable to recover it. 00:30:51.370 [2024-11-20 10:04:22.065108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.370 [2024-11-20 10:04:22.065136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.370 qpair failed and we were unable to recover it. 00:30:51.370 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.370 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:51.370 [2024-11-20 10:04:22.065485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.370 [2024-11-20 10:04:22.065515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.370 qpair failed and we were unable to recover it. 
00:30:51.370 [... errno = 111 connect() failures to 10.0.0.2:4420 continue (10:04:22.065800 - 10:04:22.071979); condensed ...]
00:30:51.370 [2024-11-20 10:04:22.071212] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
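The rpc_cmd nvmf_create_transport call traced above is what produces the *** TCP Transport Init *** notice. The same step can be issued directly against a running nvmf_tgt with SPDK's scripts/rpc.py; a minimal sketch with default parameters (the test script's extra -o flag is passed through as-is):

    # register the TCP transport with the running nvmf_tgt (defaults apply)
    scripts/rpc.py nvmf_create_transport -t TCP
    # confirm the transport is registered
    scripts/rpc.py nvmf_get_transports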
00:30:51.370 [... errno = 111 connect() failures to 10.0.0.2:4420 continue (10:04:22.072302 - 10:04:22.078655); condensed ...]
00:30:51.371 [... errno = 111 connect() failures to 10.0.0.2:4420 continue (10:04:22.078998 - 10:04:22.081292); condensed. Interleaved with them, the test harness prints: ...]
00:30:51.371 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:51.371 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:51.371 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:51.371 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
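The nvmf_create_subsystem call traced above creates the target-side subsystem the initiator will attach to: -a allows any host NQN to connect and -s fixes the serial number. As a standalone sketch against the same target via scripts/rpc.py:

    # create the subsystem; allow any host, pin the serial number
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # inspect what was created
    scripts/rpc.py nvmf_get_subsystems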
00:30:51.371 [... errno = 111 connect() failures to 10.0.0.2:4420 continue (10:04:22.081649 - 10:04:22.091194); condensed ...]
00:30:51.372 [... errno = 111 connect() failures to 10.0.0.2:4420 continue (10:04:22.091596 - 10:04:22.093980); condensed. Interleaved with them, the test harness prints: ...]
00:30:51.372 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:51.372 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:51.372 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:51.372 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
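The nvmf_subsystem_add_ns call traced above attaches the Malloc0 bdev (whose name was echoed as the bare "Malloc0" line earlier) to cnode1 as a namespace. A standalone sketch via scripts/rpc.py; the malloc capacity and block size below are illustrative, not taken from this log:

    # back the namespace with a 64 MiB ramdisk bdev using 512-byte blocks (sizes illustrative)
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # expose the bdev as a namespace of the subsystem
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0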
00:30:51.372 [... errno = 111 connect() failures to 10.0.0.2:4420 continue (10:04:22.094363 - 10:04:22.103268); condensed ...]
00:30:51.373 [2024-11-20 10:04:22.103598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.373 [2024-11-20 10:04:22.103626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.373 qpair failed and we were unable to recover it. 00:30:51.373 [2024-11-20 10:04:22.103854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.373 [2024-11-20 10:04:22.103883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.373 qpair failed and we were unable to recover it. 00:30:51.373 [2024-11-20 10:04:22.104230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.373 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.373 [2024-11-20 10:04:22.104259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.373 qpair failed and we were unable to recover it. 00:30:51.373 [2024-11-20 10:04:22.104627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.373 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.373 [2024-11-20 10:04:22.104655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.373 qpair failed and we were unable to recover it. 00:30:51.373 [2024-11-20 10:04:22.104883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.373 [2024-11-20 10:04:22.104912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.373 qpair failed and we were unable to recover it. 00:30:51.373 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.373 [2024-11-20 10:04:22.105142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.373 [2024-11-20 10:04:22.105177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.373 qpair failed and we were unable to recover it. 00:30:51.373 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:51.373 [2024-11-20 10:04:22.105567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.373 [2024-11-20 10:04:22.105595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.373 qpair failed and we were unable to recover it. 00:30:51.373 [2024-11-20 10:04:22.105807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.373 [2024-11-20 10:04:22.105834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420 00:30:51.373 qpair failed and we were unable to recover it. 
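In the trace above, rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client (scripts/rpc.py), so the nvmf_subsystem_add_listener call boils down to a single JSON-RPC request to the target. A minimal sketch of that payload, assuming the documented parameter names and the TCP/IPv4 values visible in the log (the request id is arbitrary):

/* Sketch of the JSON-RPC request behind "rpc_cmd nvmf_subsystem_add_listener
 * nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420". Field names follow
 * SPDK's documented nvmf_subsystem_add_listener parameters; this is an
 * illustration, not SPDK source. */
#include <stdio.h>

int main(void)
{
    printf("{\n"
           "  \"jsonrpc\": \"2.0\",\n"
           "  \"id\": 1,\n"
           "  \"method\": \"nvmf_subsystem_add_listener\",\n"
           "  \"params\": {\n"
           "    \"nqn\": \"nqn.2016-06.io.spdk:cnode1\",\n"
           "    \"listen_address\": {\n"
           "      \"trtype\": \"tcp\",\n"
           "      \"adrfam\": \"ipv4\",\n"
           "      \"traddr\": \"10.0.0.2\",\n"
           "      \"trsvcid\": \"4420\"\n"
           "    }\n"
           "  }\n"
           "}\n");
    return 0;
}

Until that RPC takes effect on the target side, nothing accepts on 10.0.0.2:4420, which is consistent with the initiator's connect() attempts continuing to fail below.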
00:30:51.373 [2024-11-20 10:04:22.111280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.373 [2024-11-20 10:04:22.111309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f389c000b90 with addr=10.0.0.2, port=4420
00:30:51.373 qpair failed and we were unable to recover it.
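Every retry in this stretch fails the same way: errno 111 is ECONNREFUSED on Linux, meaning the TCP connection attempt reaches 10.0.0.2 but nothing is accepting on port 4420 yet. A self-contained sketch of the failing call outside SPDK, assuming a plain blocking socket:

/* Minimal sketch: what a connect() that ends in errno 111 looks like.
 * With no listener on 10.0.0.2:4420, the kernel returns ECONNREFUSED. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Once the NOTICE below shows the target listening, the refusals stop and the failure moves up a layer, to the NVMe-oF Fabrics CONNECT.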
00:30:51.374 [2024-11-20 10:04:22.111466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:51.374 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:51.374 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:51.374 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:51.374 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:51.374 [2024-11-20 10:04:22.122191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.374 [2024-11-20 10:04:22.122309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.374 [2024-11-20 10:04:22.122355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.374 [2024-11-20 10:04:22.122377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.374 [2024-11-20 10:04:22.122397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.374 [2024-11-20 10:04:22.122450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.374 qpair failed and we were unable to recover it.
00:30:51.374 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:51.374 10:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1564691
00:30:51.374 [2024-11-20 10:04:22.132112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.374 [2024-11-20 10:04:22.132201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.374 [2024-11-20 10:04:22.132229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.374 [2024-11-20 10:04:22.132244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.374 [2024-11-20 10:04:22.132258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.374 [2024-11-20 10:04:22.132292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.374 qpair failed and we were unable to recover it.
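From this point the TCP socket connects, but the Fabrics CONNECT for I/O qpair 1 is rejected: the target does not recognize controller ID 0x1 (_nvmf_ctrlr_add_io_qpair), and the host reports "sct 1, sc 130". Status code type 1 is the command-specific set; for a Fabrics CONNECT, status 130 (0x82) is "Connect Invalid Parameters" in the standard NVMe over Fabrics encoding. The decode below is a sketch against those spec values, not SPDK's own headers:

/* Sketch: decoding "sct 1, sc 130" from the log, assuming the standard
 * NVMe over Fabrics status encoding (values mirror the spec; the names
 * here are illustrative, not taken from spdk headers). */
#include <stdio.h>

#define SCT_COMMAND_SPECIFIC 0x1   /* status code type 1 */

static const char *connect_sc_name(unsigned int sc)
{
    switch (sc) {
    case 0x80: return "Connect Incompatible Format";
    case 0x81: return "Connect Controller Busy";
    case 0x82: return "Connect Invalid Parameters"; /* sc 130 in the log */
    case 0x83: return "Connect Restart Discovery";
    case 0x84: return "Connect Invalid Host";
    default:   return "unknown";
    }
}

int main(void)
{
    unsigned int sct = 1, sc = 130;

    if (sct == SCT_COMMAND_SPECIFIC)
        printf("sct %u, sc %u -> %s\n", sct, sc, connect_sc_name(sc));
    return 0;
}

That reading is consistent with the target-side message: the host is trying to attach an I/O queue to a controller ID the target no longer knows about, which is the kind of failure a target-disconnect test case (nvmf_target_disconnect_tc2) is built to provoke.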
00:30:51.374 [2024-11-20 10:04:22.142023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.374 [2024-11-20 10:04:22.142082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.374 [2024-11-20 10:04:22.142101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.374 [2024-11-20 10:04:22.142112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.374 [2024-11-20 10:04:22.142121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.374 [2024-11-20 10:04:22.142143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.374 qpair failed and we were unable to recover it. 00:30:51.374 [2024-11-20 10:04:22.152110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.374 [2024-11-20 10:04:22.152173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.374 [2024-11-20 10:04:22.152187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.374 [2024-11-20 10:04:22.152198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.374 [2024-11-20 10:04:22.152204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.374 [2024-11-20 10:04:22.152219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.374 qpair failed and we were unable to recover it. 00:30:51.374 [2024-11-20 10:04:22.161985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.374 [2024-11-20 10:04:22.162083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.374 [2024-11-20 10:04:22.162097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.374 [2024-11-20 10:04:22.162105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.374 [2024-11-20 10:04:22.162112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.374 [2024-11-20 10:04:22.162127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.374 qpair failed and we were unable to recover it. 
00:30:51.374 [2024-11-20 10:04:22.172098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.374 [2024-11-20 10:04:22.172156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.374 [2024-11-20 10:04:22.172176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.374 [2024-11-20 10:04:22.172183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.374 [2024-11-20 10:04:22.172193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.374 [2024-11-20 10:04:22.172209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.374 qpair failed and we were unable to recover it. 00:30:51.374 [2024-11-20 10:04:22.182046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.374 [2024-11-20 10:04:22.182093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.374 [2024-11-20 10:04:22.182108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.374 [2024-11-20 10:04:22.182116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.374 [2024-11-20 10:04:22.182123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.374 [2024-11-20 10:04:22.182137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.374 qpair failed and we were unable to recover it. 00:30:51.374 [2024-11-20 10:04:22.192021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.374 [2024-11-20 10:04:22.192085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.374 [2024-11-20 10:04:22.192098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.374 [2024-11-20 10:04:22.192105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.374 [2024-11-20 10:04:22.192112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.374 [2024-11-20 10:04:22.192130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.374 qpair failed and we were unable to recover it. 
00:30:51.374 [2024-11-20 10:04:22.202200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.374 [2024-11-20 10:04:22.202256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.374 [2024-11-20 10:04:22.202269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.374 [2024-11-20 10:04:22.202278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.374 [2024-11-20 10:04:22.202284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.374 [2024-11-20 10:04:22.202299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.374 qpair failed and we were unable to recover it. 00:30:51.374 [2024-11-20 10:04:22.212227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.374 [2024-11-20 10:04:22.212296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.374 [2024-11-20 10:04:22.212310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.374 [2024-11-20 10:04:22.212318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.374 [2024-11-20 10:04:22.212324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.374 [2024-11-20 10:04:22.212339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.374 qpair failed and we were unable to recover it. 00:30:51.374 [2024-11-20 10:04:22.222095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.374 [2024-11-20 10:04:22.222141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.375 [2024-11-20 10:04:22.222156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.375 [2024-11-20 10:04:22.222167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.375 [2024-11-20 10:04:22.222174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.375 [2024-11-20 10:04:22.222189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.375 qpair failed and we were unable to recover it. 
00:30:51.375 [2024-11-20 10:04:22.232239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.375 [2024-11-20 10:04:22.232297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.375 [2024-11-20 10:04:22.232311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.375 [2024-11-20 10:04:22.232318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.375 [2024-11-20 10:04:22.232324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.375 [2024-11-20 10:04:22.232339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.375 qpair failed and we were unable to recover it. 00:30:51.375 [2024-11-20 10:04:22.242333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.375 [2024-11-20 10:04:22.242401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.375 [2024-11-20 10:04:22.242415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.375 [2024-11-20 10:04:22.242422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.375 [2024-11-20 10:04:22.242428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.375 [2024-11-20 10:04:22.242443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.375 qpair failed and we were unable to recover it. 00:30:51.375 [2024-11-20 10:04:22.252273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.375 [2024-11-20 10:04:22.252330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.375 [2024-11-20 10:04:22.252343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.375 [2024-11-20 10:04:22.252350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.375 [2024-11-20 10:04:22.252357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.375 [2024-11-20 10:04:22.252371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.375 qpair failed and we were unable to recover it. 
00:30:51.375 [2024-11-20 10:04:22.262297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.375 [2024-11-20 10:04:22.262353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.375 [2024-11-20 10:04:22.262366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.375 [2024-11-20 10:04:22.262374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.375 [2024-11-20 10:04:22.262380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.375 [2024-11-20 10:04:22.262395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.375 qpair failed and we were unable to recover it. 00:30:51.637 [2024-11-20 10:04:22.272366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.637 [2024-11-20 10:04:22.272427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.637 [2024-11-20 10:04:22.272440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.637 [2024-11-20 10:04:22.272448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.637 [2024-11-20 10:04:22.272454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.637 [2024-11-20 10:04:22.272468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.637 qpair failed and we were unable to recover it. 00:30:51.637 [2024-11-20 10:04:22.282400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.637 [2024-11-20 10:04:22.282456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.637 [2024-11-20 10:04:22.282473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.637 [2024-11-20 10:04:22.282480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.637 [2024-11-20 10:04:22.282486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.638 [2024-11-20 10:04:22.282501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.638 qpair failed and we were unable to recover it. 
00:30:51.638 [2024-11-20 10:04:22.292412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.638 [2024-11-20 10:04:22.292467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.638 [2024-11-20 10:04:22.292480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.638 [2024-11-20 10:04:22.292487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.638 [2024-11-20 10:04:22.292494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.638 [2024-11-20 10:04:22.292508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.638 qpair failed and we were unable to recover it. 00:30:51.638 [2024-11-20 10:04:22.302277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.638 [2024-11-20 10:04:22.302324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.638 [2024-11-20 10:04:22.302338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.638 [2024-11-20 10:04:22.302345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.638 [2024-11-20 10:04:22.302351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.638 [2024-11-20 10:04:22.302366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.638 qpair failed and we were unable to recover it. 00:30:51.638 [2024-11-20 10:04:22.312469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.638 [2024-11-20 10:04:22.312524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.638 [2024-11-20 10:04:22.312538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.638 [2024-11-20 10:04:22.312545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.638 [2024-11-20 10:04:22.312551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.638 [2024-11-20 10:04:22.312566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.638 qpair failed and we were unable to recover it. 
00:30:51.638 [2024-11-20 10:04:22.322513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.638 [2024-11-20 10:04:22.322567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.638 [2024-11-20 10:04:22.322581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.638 [2024-11-20 10:04:22.322588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.638 [2024-11-20 10:04:22.322598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.638 [2024-11-20 10:04:22.322613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.638 qpair failed and we were unable to recover it. 00:30:51.638 [2024-11-20 10:04:22.332503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.638 [2024-11-20 10:04:22.332557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.638 [2024-11-20 10:04:22.332570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.638 [2024-11-20 10:04:22.332578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.638 [2024-11-20 10:04:22.332584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.638 [2024-11-20 10:04:22.332598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.638 qpair failed and we were unable to recover it. 00:30:51.638 [2024-11-20 10:04:22.342521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.638 [2024-11-20 10:04:22.342571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.638 [2024-11-20 10:04:22.342584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.638 [2024-11-20 10:04:22.342591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.638 [2024-11-20 10:04:22.342598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.638 [2024-11-20 10:04:22.342613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.638 qpair failed and we were unable to recover it. 
00:30:51.638 [2024-11-20 10:04:22.352723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.638 [2024-11-20 10:04:22.352787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.638 [2024-11-20 10:04:22.352800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.638 [2024-11-20 10:04:22.352807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.638 [2024-11-20 10:04:22.352814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.638 [2024-11-20 10:04:22.352828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.638 qpair failed and we were unable to recover it. 00:30:51.638 [2024-11-20 10:04:22.362658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.638 [2024-11-20 10:04:22.362714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.638 [2024-11-20 10:04:22.362727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.638 [2024-11-20 10:04:22.362734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.638 [2024-11-20 10:04:22.362741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.638 [2024-11-20 10:04:22.362756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.638 qpair failed and we were unable to recover it. 00:30:51.638 [2024-11-20 10:04:22.372591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.638 [2024-11-20 10:04:22.372683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.638 [2024-11-20 10:04:22.372697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.638 [2024-11-20 10:04:22.372704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.638 [2024-11-20 10:04:22.372711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.638 [2024-11-20 10:04:22.372725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.638 qpair failed and we were unable to recover it. 
00:30:51.638 [2024-11-20 10:04:22.382646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.638 [2024-11-20 10:04:22.382693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.638 [2024-11-20 10:04:22.382706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.638 [2024-11-20 10:04:22.382714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.638 [2024-11-20 10:04:22.382720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.638 [2024-11-20 10:04:22.382734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.638 qpair failed and we were unable to recover it. 00:30:51.638 [2024-11-20 10:04:22.392659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.638 [2024-11-20 10:04:22.392746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.638 [2024-11-20 10:04:22.392759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.638 [2024-11-20 10:04:22.392768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.638 [2024-11-20 10:04:22.392775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.638 [2024-11-20 10:04:22.392789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.638 qpair failed and we were unable to recover it. 00:30:51.638 [2024-11-20 10:04:22.402750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.638 [2024-11-20 10:04:22.402806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.638 [2024-11-20 10:04:22.402819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.638 [2024-11-20 10:04:22.402827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.638 [2024-11-20 10:04:22.402833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.638 [2024-11-20 10:04:22.402848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.638 qpair failed and we were unable to recover it. 
00:30:51.638 [2024-11-20 10:04:22.412762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.638 [2024-11-20 10:04:22.412810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.638 [2024-11-20 10:04:22.412830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.638 [2024-11-20 10:04:22.412839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.639 [2024-11-20 10:04:22.412846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.639 [2024-11-20 10:04:22.412860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.639 qpair failed and we were unable to recover it. 00:30:51.639 [2024-11-20 10:04:22.422736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.639 [2024-11-20 10:04:22.422789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.639 [2024-11-20 10:04:22.422802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.639 [2024-11-20 10:04:22.422809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.639 [2024-11-20 10:04:22.422816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.639 [2024-11-20 10:04:22.422830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.639 qpair failed and we were unable to recover it. 00:30:51.639 [2024-11-20 10:04:22.432786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.639 [2024-11-20 10:04:22.432837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.639 [2024-11-20 10:04:22.432851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.639 [2024-11-20 10:04:22.432858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.639 [2024-11-20 10:04:22.432865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.639 [2024-11-20 10:04:22.432879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.639 qpair failed and we were unable to recover it. 
00:30:51.639 [2024-11-20 10:04:22.442834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.639 [2024-11-20 10:04:22.442894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.639 [2024-11-20 10:04:22.442919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.639 [2024-11-20 10:04:22.442928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.639 [2024-11-20 10:04:22.442935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.639 [2024-11-20 10:04:22.442957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.639 qpair failed and we were unable to recover it. 00:30:51.639 [2024-11-20 10:04:22.452840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.639 [2024-11-20 10:04:22.452898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.639 [2024-11-20 10:04:22.452913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.639 [2024-11-20 10:04:22.452921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.639 [2024-11-20 10:04:22.452932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.639 [2024-11-20 10:04:22.452948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.639 qpair failed and we were unable to recover it. 00:30:51.639 [2024-11-20 10:04:22.462857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.639 [2024-11-20 10:04:22.462908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.639 [2024-11-20 10:04:22.462922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.639 [2024-11-20 10:04:22.462929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.639 [2024-11-20 10:04:22.462936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.639 [2024-11-20 10:04:22.462952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.639 qpair failed and we were unable to recover it. 
00:30:51.639 [2024-11-20 10:04:22.472907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.639 [2024-11-20 10:04:22.472968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.639 [2024-11-20 10:04:22.472983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.639 [2024-11-20 10:04:22.472990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.639 [2024-11-20 10:04:22.472997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.639 [2024-11-20 10:04:22.473017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.639 qpair failed and we were unable to recover it. 00:30:51.639 [2024-11-20 10:04:22.482834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.639 [2024-11-20 10:04:22.482894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.639 [2024-11-20 10:04:22.482908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.639 [2024-11-20 10:04:22.482916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.639 [2024-11-20 10:04:22.482923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.639 [2024-11-20 10:04:22.482943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.639 qpair failed and we were unable to recover it. 00:30:51.639 [2024-11-20 10:04:22.492997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.639 [2024-11-20 10:04:22.493045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.639 [2024-11-20 10:04:22.493058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.639 [2024-11-20 10:04:22.493066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.639 [2024-11-20 10:04:22.493072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.639 [2024-11-20 10:04:22.493088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.639 qpair failed and we were unable to recover it. 
00:30:51.639 [2024-11-20 10:04:22.502960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.639 [2024-11-20 10:04:22.503008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.639 [2024-11-20 10:04:22.503021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.639 [2024-11-20 10:04:22.503029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.639 [2024-11-20 10:04:22.503036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.639 [2024-11-20 10:04:22.503051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.639 qpair failed and we were unable to recover it. 00:30:51.639 [2024-11-20 10:04:22.513021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.639 [2024-11-20 10:04:22.513075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.639 [2024-11-20 10:04:22.513088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.639 [2024-11-20 10:04:22.513096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.639 [2024-11-20 10:04:22.513103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.639 [2024-11-20 10:04:22.513117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.639 qpair failed and we were unable to recover it. 00:30:51.639 [2024-11-20 10:04:22.523035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.639 [2024-11-20 10:04:22.523115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.639 [2024-11-20 10:04:22.523128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.639 [2024-11-20 10:04:22.523135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.639 [2024-11-20 10:04:22.523143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.639 [2024-11-20 10:04:22.523161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.639 qpair failed and we were unable to recover it. 
00:30:51.639 [2024-11-20 10:04:22.532983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.639 [2024-11-20 10:04:22.533079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.639 [2024-11-20 10:04:22.533093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.639 [2024-11-20 10:04:22.533100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.639 [2024-11-20 10:04:22.533107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.639 [2024-11-20 10:04:22.533121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.639 qpair failed and we were unable to recover it. 00:30:51.639 [2024-11-20 10:04:22.543061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.639 [2024-11-20 10:04:22.543105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.639 [2024-11-20 10:04:22.543121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.639 [2024-11-20 10:04:22.543129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.640 [2024-11-20 10:04:22.543135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.640 [2024-11-20 10:04:22.543150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.640 qpair failed and we were unable to recover it. 00:30:51.903 [2024-11-20 10:04:22.553106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.903 [2024-11-20 10:04:22.553166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.903 [2024-11-20 10:04:22.553180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.903 [2024-11-20 10:04:22.553187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.903 [2024-11-20 10:04:22.553194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.903 [2024-11-20 10:04:22.553209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.903 qpair failed and we were unable to recover it. 
00:30:51.903 [2024-11-20 10:04:22.563213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.903 [2024-11-20 10:04:22.563279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.903 [2024-11-20 10:04:22.563293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.903 [2024-11-20 10:04:22.563300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.903 [2024-11-20 10:04:22.563307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.903 [2024-11-20 10:04:22.563322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.903 qpair failed and we were unable to recover it. 00:30:51.903 [2024-11-20 10:04:22.573198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.903 [2024-11-20 10:04:22.573248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.903 [2024-11-20 10:04:22.573261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.903 [2024-11-20 10:04:22.573268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.903 [2024-11-20 10:04:22.573275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.903 [2024-11-20 10:04:22.573290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.903 qpair failed and we were unable to recover it. 00:30:51.903 [2024-11-20 10:04:22.583179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.903 [2024-11-20 10:04:22.583225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.903 [2024-11-20 10:04:22.583239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.903 [2024-11-20 10:04:22.583249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.903 [2024-11-20 10:04:22.583256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:51.903 [2024-11-20 10:04:22.583271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.903 qpair failed and we were unable to recover it. 
00:30:51.903 [2024-11-20 10:04:22.593259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.904 [2024-11-20 10:04:22.593346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.904 [2024-11-20 10:04:22.593359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.904 [2024-11-20 10:04:22.593367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.904 [2024-11-20 10:04:22.593374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.904 [2024-11-20 10:04:22.593389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.904 qpair failed and we were unable to recover it.
00:30:51.904 [2024-11-20 10:04:22.603290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.904 [2024-11-20 10:04:22.603346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.904 [2024-11-20 10:04:22.603359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.904 [2024-11-20 10:04:22.603367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.904 [2024-11-20 10:04:22.603373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.904 [2024-11-20 10:04:22.603388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.904 qpair failed and we were unable to recover it.
00:30:51.904 [2024-11-20 10:04:22.613291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.904 [2024-11-20 10:04:22.613343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.904 [2024-11-20 10:04:22.613357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.904 [2024-11-20 10:04:22.613364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.904 [2024-11-20 10:04:22.613371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.904 [2024-11-20 10:04:22.613386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.904 qpair failed and we were unable to recover it.
00:30:51.904 [2024-11-20 10:04:22.623323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.904 [2024-11-20 10:04:22.623367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.904 [2024-11-20 10:04:22.623381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.904 [2024-11-20 10:04:22.623388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.904 [2024-11-20 10:04:22.623395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.904 [2024-11-20 10:04:22.623414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.904 qpair failed and we were unable to recover it.
00:30:51.904 [2024-11-20 10:04:22.633354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.904 [2024-11-20 10:04:22.633410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.904 [2024-11-20 10:04:22.633423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.904 [2024-11-20 10:04:22.633431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.904 [2024-11-20 10:04:22.633438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.904 [2024-11-20 10:04:22.633453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.904 qpair failed and we were unable to recover it.
00:30:51.904 [2024-11-20 10:04:22.643384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.904 [2024-11-20 10:04:22.643440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.904 [2024-11-20 10:04:22.643453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.904 [2024-11-20 10:04:22.643460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.904 [2024-11-20 10:04:22.643467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.904 [2024-11-20 10:04:22.643482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.904 qpair failed and we were unable to recover it.
00:30:51.904 [2024-11-20 10:04:22.653399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.904 [2024-11-20 10:04:22.653454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.904 [2024-11-20 10:04:22.653467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.904 [2024-11-20 10:04:22.653474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.904 [2024-11-20 10:04:22.653481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.904 [2024-11-20 10:04:22.653495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.904 qpair failed and we were unable to recover it.
00:30:51.904 [2024-11-20 10:04:22.663424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.904 [2024-11-20 10:04:22.663470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.904 [2024-11-20 10:04:22.663484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.904 [2024-11-20 10:04:22.663491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.904 [2024-11-20 10:04:22.663498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.904 [2024-11-20 10:04:22.663513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.904 qpair failed and we were unable to recover it.
00:30:51.904 [2024-11-20 10:04:22.673480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.904 [2024-11-20 10:04:22.673536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.904 [2024-11-20 10:04:22.673550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.904 [2024-11-20 10:04:22.673557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.904 [2024-11-20 10:04:22.673564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.904 [2024-11-20 10:04:22.673578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.904 qpair failed and we were unable to recover it.
00:30:51.904 [2024-11-20 10:04:22.683519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.904 [2024-11-20 10:04:22.683574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.904 [2024-11-20 10:04:22.683587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.904 [2024-11-20 10:04:22.683594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.904 [2024-11-20 10:04:22.683601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.904 [2024-11-20 10:04:22.683615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.904 qpair failed and we were unable to recover it.
00:30:51.904 [2024-11-20 10:04:22.693491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.904 [2024-11-20 10:04:22.693547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.904 [2024-11-20 10:04:22.693560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.904 [2024-11-20 10:04:22.693567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.904 [2024-11-20 10:04:22.693574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.904 [2024-11-20 10:04:22.693589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.904 qpair failed and we were unable to recover it.
00:30:51.904 [2024-11-20 10:04:22.703505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.904 [2024-11-20 10:04:22.703557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.904 [2024-11-20 10:04:22.703570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.904 [2024-11-20 10:04:22.703578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.904 [2024-11-20 10:04:22.703585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.904 [2024-11-20 10:04:22.703599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.904 qpair failed and we were unable to recover it.
00:30:51.904 [2024-11-20 10:04:22.713462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.904 [2024-11-20 10:04:22.713521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.904 [2024-11-20 10:04:22.713533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.904 [2024-11-20 10:04:22.713544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.904 [2024-11-20 10:04:22.713551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.904 [2024-11-20 10:04:22.713566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.904 qpair failed and we were unable to recover it.
00:30:51.904 [2024-11-20 10:04:22.723582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.904 [2024-11-20 10:04:22.723634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.905 [2024-11-20 10:04:22.723648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.905 [2024-11-20 10:04:22.723655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.905 [2024-11-20 10:04:22.723661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.905 [2024-11-20 10:04:22.723676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.905 qpair failed and we were unable to recover it.
00:30:51.905 [2024-11-20 10:04:22.733620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.905 [2024-11-20 10:04:22.733670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.905 [2024-11-20 10:04:22.733683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.905 [2024-11-20 10:04:22.733691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.905 [2024-11-20 10:04:22.733697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.905 [2024-11-20 10:04:22.733712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.905 qpair failed and we were unable to recover it.
00:30:51.905 [2024-11-20 10:04:22.743580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.905 [2024-11-20 10:04:22.743628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.905 [2024-11-20 10:04:22.743641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.905 [2024-11-20 10:04:22.743648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.905 [2024-11-20 10:04:22.743655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.905 [2024-11-20 10:04:22.743670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.905 qpair failed and we were unable to recover it.
00:30:51.905 [2024-11-20 10:04:22.753675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.905 [2024-11-20 10:04:22.753733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.905 [2024-11-20 10:04:22.753746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.905 [2024-11-20 10:04:22.753753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.905 [2024-11-20 10:04:22.753760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.905 [2024-11-20 10:04:22.753779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.905 qpair failed and we were unable to recover it.
00:30:51.905 [2024-11-20 10:04:22.763725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.905 [2024-11-20 10:04:22.763781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.905 [2024-11-20 10:04:22.763794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.905 [2024-11-20 10:04:22.763801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.905 [2024-11-20 10:04:22.763808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.905 [2024-11-20 10:04:22.763822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.905 qpair failed and we were unable to recover it.
00:30:51.905 [2024-11-20 10:04:22.773736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.905 [2024-11-20 10:04:22.773785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.905 [2024-11-20 10:04:22.773798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.905 [2024-11-20 10:04:22.773805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.905 [2024-11-20 10:04:22.773812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.905 [2024-11-20 10:04:22.773827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.905 qpair failed and we were unable to recover it.
00:30:51.905 [2024-11-20 10:04:22.783725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.905 [2024-11-20 10:04:22.783818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.905 [2024-11-20 10:04:22.783831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.905 [2024-11-20 10:04:22.783839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.905 [2024-11-20 10:04:22.783845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.905 [2024-11-20 10:04:22.783860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.905 qpair failed and we were unable to recover it.
00:30:51.905 [2024-11-20 10:04:22.793807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.905 [2024-11-20 10:04:22.793860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.905 [2024-11-20 10:04:22.793873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.905 [2024-11-20 10:04:22.793880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.905 [2024-11-20 10:04:22.793887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.905 [2024-11-20 10:04:22.793901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.905 qpair failed and we were unable to recover it.
00:30:51.905 [2024-11-20 10:04:22.803842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.905 [2024-11-20 10:04:22.803910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.905 [2024-11-20 10:04:22.803934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.905 [2024-11-20 10:04:22.803943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.905 [2024-11-20 10:04:22.803951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.905 [2024-11-20 10:04:22.803971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.905 qpair failed and we were unable to recover it.
00:30:51.905 [2024-11-20 10:04:22.813738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:51.905 [2024-11-20 10:04:22.813797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:51.905 [2024-11-20 10:04:22.813813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:51.905 [2024-11-20 10:04:22.813821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:51.905 [2024-11-20 10:04:22.813828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:51.905 [2024-11-20 10:04:22.813847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:51.905 qpair failed and we were unable to recover it.
00:30:52.169 [2024-11-20 10:04:22.823843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.169 [2024-11-20 10:04:22.823890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.169 [2024-11-20 10:04:22.823905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.169 [2024-11-20 10:04:22.823912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.169 [2024-11-20 10:04:22.823919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.169 [2024-11-20 10:04:22.823935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.169 qpair failed and we were unable to recover it.
00:30:52.169 [2024-11-20 10:04:22.833936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.169 [2024-11-20 10:04:22.834020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.169 [2024-11-20 10:04:22.834045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.169 [2024-11-20 10:04:22.834054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.169 [2024-11-20 10:04:22.834061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.169 [2024-11-20 10:04:22.834081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.169 qpair failed and we were unable to recover it.
00:30:52.169 [2024-11-20 10:04:22.843952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.169 [2024-11-20 10:04:22.844010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.169 [2024-11-20 10:04:22.844030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.169 [2024-11-20 10:04:22.844037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.169 [2024-11-20 10:04:22.844045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.169 [2024-11-20 10:04:22.844061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.169 qpair failed and we were unable to recover it.
00:30:52.169 [2024-11-20 10:04:22.853967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.169 [2024-11-20 10:04:22.854020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.169 [2024-11-20 10:04:22.854034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.169 [2024-11-20 10:04:22.854042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.169 [2024-11-20 10:04:22.854048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.169 [2024-11-20 10:04:22.854064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.169 qpair failed and we were unable to recover it.
00:30:52.169 [2024-11-20 10:04:22.863934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.169 [2024-11-20 10:04:22.863981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.169 [2024-11-20 10:04:22.863995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.169 [2024-11-20 10:04:22.864002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.169 [2024-11-20 10:04:22.864009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.169 [2024-11-20 10:04:22.864023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.169 qpair failed and we were unable to recover it.
00:30:52.169 [2024-11-20 10:04:22.874087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.169 [2024-11-20 10:04:22.874142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.169 [2024-11-20 10:04:22.874155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.169 [2024-11-20 10:04:22.874166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.169 [2024-11-20 10:04:22.874173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.169 [2024-11-20 10:04:22.874188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.169 qpair failed and we were unable to recover it.
00:30:52.169 [2024-11-20 10:04:22.884068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.169 [2024-11-20 10:04:22.884124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.169 [2024-11-20 10:04:22.884137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.169 [2024-11-20 10:04:22.884145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.169 [2024-11-20 10:04:22.884155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.169 [2024-11-20 10:04:22.884174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.169 qpair failed and we were unable to recover it.
00:30:52.169 [2024-11-20 10:04:22.894089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.169 [2024-11-20 10:04:22.894178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.169 [2024-11-20 10:04:22.894191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.169 [2024-11-20 10:04:22.894199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.169 [2024-11-20 10:04:22.894206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.169 [2024-11-20 10:04:22.894220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.169 qpair failed and we were unable to recover it.
00:30:52.169 [2024-11-20 10:04:22.904066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.169 [2024-11-20 10:04:22.904116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.169 [2024-11-20 10:04:22.904129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.169 [2024-11-20 10:04:22.904136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.169 [2024-11-20 10:04:22.904143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.169 [2024-11-20 10:04:22.904162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.169 qpair failed and we were unable to recover it.
00:30:52.169 [2024-11-20 10:04:22.914108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.170 [2024-11-20 10:04:22.914169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.170 [2024-11-20 10:04:22.914182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.170 [2024-11-20 10:04:22.914190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.170 [2024-11-20 10:04:22.914196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.170 [2024-11-20 10:04:22.914211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.170 qpair failed and we were unable to recover it.
00:30:52.170 [2024-11-20 10:04:22.924195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.170 [2024-11-20 10:04:22.924248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.170 [2024-11-20 10:04:22.924261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.170 [2024-11-20 10:04:22.924269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.170 [2024-11-20 10:04:22.924276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.170 [2024-11-20 10:04:22.924290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.170 qpair failed and we were unable to recover it.
00:30:52.170 [2024-11-20 10:04:22.934186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.170 [2024-11-20 10:04:22.934240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.170 [2024-11-20 10:04:22.934253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.170 [2024-11-20 10:04:22.934261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.170 [2024-11-20 10:04:22.934267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.170 [2024-11-20 10:04:22.934282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.170 qpair failed and we were unable to recover it.
00:30:52.170 [2024-11-20 10:04:22.944177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.170 [2024-11-20 10:04:22.944228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.170 [2024-11-20 10:04:22.944242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.170 [2024-11-20 10:04:22.944250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.170 [2024-11-20 10:04:22.944256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.170 [2024-11-20 10:04:22.944271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.170 qpair failed and we were unable to recover it.
00:30:52.170 [2024-11-20 10:04:22.954276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.170 [2024-11-20 10:04:22.954328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.170 [2024-11-20 10:04:22.954341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.170 [2024-11-20 10:04:22.954348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.170 [2024-11-20 10:04:22.954355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.170 [2024-11-20 10:04:22.954369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.170 qpair failed and we were unable to recover it.
00:30:52.170 [2024-11-20 10:04:22.964271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.170 [2024-11-20 10:04:22.964325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.170 [2024-11-20 10:04:22.964338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.170 [2024-11-20 10:04:22.964346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.170 [2024-11-20 10:04:22.964352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.170 [2024-11-20 10:04:22.964367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.170 qpair failed and we were unable to recover it.
00:30:52.170 [2024-11-20 10:04:22.974312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.170 [2024-11-20 10:04:22.974361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.170 [2024-11-20 10:04:22.974378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.170 [2024-11-20 10:04:22.974386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.170 [2024-11-20 10:04:22.974392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.170 [2024-11-20 10:04:22.974407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.170 qpair failed and we were unable to recover it.
00:30:52.170 [2024-11-20 10:04:22.984306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.170 [2024-11-20 10:04:22.984359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.170 [2024-11-20 10:04:22.984372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.170 [2024-11-20 10:04:22.984380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.170 [2024-11-20 10:04:22.984386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.170 [2024-11-20 10:04:22.984401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.170 qpair failed and we were unable to recover it.
00:30:52.170 [2024-11-20 10:04:22.994232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.170 [2024-11-20 10:04:22.994288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.170 [2024-11-20 10:04:22.994301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.170 [2024-11-20 10:04:22.994308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.170 [2024-11-20 10:04:22.994315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.170 [2024-11-20 10:04:22.994330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.170 qpair failed and we were unable to recover it.
00:30:52.170 [2024-11-20 10:04:23.004406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.170 [2024-11-20 10:04:23.004459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.170 [2024-11-20 10:04:23.004472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.170 [2024-11-20 10:04:23.004479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.170 [2024-11-20 10:04:23.004486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.170 [2024-11-20 10:04:23.004500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.170 qpair failed and we were unable to recover it.
00:30:52.170 [2024-11-20 10:04:23.014407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.170 [2024-11-20 10:04:23.014458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.170 [2024-11-20 10:04:23.014471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.170 [2024-11-20 10:04:23.014479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.170 [2024-11-20 10:04:23.014489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.170 [2024-11-20 10:04:23.014503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.170 qpair failed and we were unable to recover it.
00:30:52.170 [2024-11-20 10:04:23.024408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.170 [2024-11-20 10:04:23.024457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.170 [2024-11-20 10:04:23.024470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.170 [2024-11-20 10:04:23.024478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.170 [2024-11-20 10:04:23.024484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.170 [2024-11-20 10:04:23.024499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.170 qpair failed and we were unable to recover it.
00:30:52.170 [2024-11-20 10:04:23.034499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.170 [2024-11-20 10:04:23.034555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.170 [2024-11-20 10:04:23.034568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.170 [2024-11-20 10:04:23.034575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.170 [2024-11-20 10:04:23.034582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.170 [2024-11-20 10:04:23.034597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.170 qpair failed and we were unable to recover it.
00:30:52.171 [2024-11-20 10:04:23.044508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.171 [2024-11-20 10:04:23.044560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.171 [2024-11-20 10:04:23.044572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.171 [2024-11-20 10:04:23.044580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.171 [2024-11-20 10:04:23.044586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.171 [2024-11-20 10:04:23.044601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.171 qpair failed and we were unable to recover it.
00:30:52.171 [2024-11-20 10:04:23.054503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.171 [2024-11-20 10:04:23.054559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.171 [2024-11-20 10:04:23.054572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.171 [2024-11-20 10:04:23.054579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.171 [2024-11-20 10:04:23.054586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.171 [2024-11-20 10:04:23.054600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.171 qpair failed and we were unable to recover it.
00:30:52.171 [2024-11-20 10:04:23.064410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.171 [2024-11-20 10:04:23.064461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.171 [2024-11-20 10:04:23.064474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.171 [2024-11-20 10:04:23.064482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.171 [2024-11-20 10:04:23.064489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.171 [2024-11-20 10:04:23.064503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.171 qpair failed and we were unable to recover it.
00:30:52.171 [2024-11-20 10:04:23.074603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.171 [2024-11-20 10:04:23.074659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.171 [2024-11-20 10:04:23.074672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.171 [2024-11-20 10:04:23.074679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.171 [2024-11-20 10:04:23.074686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.171 [2024-11-20 10:04:23.074700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.171 qpair failed and we were unable to recover it.
00:30:52.433 [2024-11-20 10:04:23.084637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.433 [2024-11-20 10:04:23.084693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.433 [2024-11-20 10:04:23.084706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.433 [2024-11-20 10:04:23.084713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.433 [2024-11-20 10:04:23.084720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.433 [2024-11-20 10:04:23.084734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.433 qpair failed and we were unable to recover it.
00:30:52.433 [2024-11-20 10:04:23.094638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.433 [2024-11-20 10:04:23.094740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.433 [2024-11-20 10:04:23.094753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.433 [2024-11-20 10:04:23.094761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.433 [2024-11-20 10:04:23.094767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.433 [2024-11-20 10:04:23.094782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.433 qpair failed and we were unable to recover it.
00:30:52.433 [2024-11-20 10:04:23.104633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.433 [2024-11-20 10:04:23.104680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.433 [2024-11-20 10:04:23.104701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.433 [2024-11-20 10:04:23.104708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.433 [2024-11-20 10:04:23.104715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.433 [2024-11-20 10:04:23.104729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.433 qpair failed and we were unable to recover it.
00:30:52.433 [2024-11-20 10:04:23.114711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.433 [2024-11-20 10:04:23.114765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.434 [2024-11-20 10:04:23.114779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.434 [2024-11-20 10:04:23.114786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.434 [2024-11-20 10:04:23.114792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.434 [2024-11-20 10:04:23.114807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.434 qpair failed and we were unable to recover it.
00:30:52.434 [2024-11-20 10:04:23.124740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.434 [2024-11-20 10:04:23.124794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.434 [2024-11-20 10:04:23.124807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.434 [2024-11-20 10:04:23.124814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.434 [2024-11-20 10:04:23.124821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.434 [2024-11-20 10:04:23.124835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.434 qpair failed and we were unable to recover it.
00:30:52.434 [2024-11-20 10:04:23.134620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.434 [2024-11-20 10:04:23.134674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.434 [2024-11-20 10:04:23.134688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.434 [2024-11-20 10:04:23.134695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.434 [2024-11-20 10:04:23.134702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.434 [2024-11-20 10:04:23.134717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.434 qpair failed and we were unable to recover it.
00:30:52.434 [2024-11-20 10:04:23.144786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.434 [2024-11-20 10:04:23.144856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.434 [2024-11-20 10:04:23.144870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.434 [2024-11-20 10:04:23.144880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.434 [2024-11-20 10:04:23.144887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.434 [2024-11-20 10:04:23.144903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.434 qpair failed and we were unable to recover it.
00:30:52.434 [2024-11-20 10:04:23.154809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.434 [2024-11-20 10:04:23.154859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.434 [2024-11-20 10:04:23.154873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.434 [2024-11-20 10:04:23.154880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.434 [2024-11-20 10:04:23.154887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.434 [2024-11-20 10:04:23.154901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.434 qpair failed and we were unable to recover it.
00:30:52.434 [2024-11-20 10:04:23.164862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.434 [2024-11-20 10:04:23.164914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.434 [2024-11-20 10:04:23.164928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.434 [2024-11-20 10:04:23.164935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.434 [2024-11-20 10:04:23.164942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.434 [2024-11-20 10:04:23.164957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.434 qpair failed and we were unable to recover it.
00:30:52.434 [2024-11-20 10:04:23.174875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.434 [2024-11-20 10:04:23.174928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.434 [2024-11-20 10:04:23.174941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.434 [2024-11-20 10:04:23.174948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.434 [2024-11-20 10:04:23.174955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.434 [2024-11-20 10:04:23.174969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.434 qpair failed and we were unable to recover it.
00:30:52.434 [2024-11-20 10:04:23.184873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.434 [2024-11-20 10:04:23.184960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.434 [2024-11-20 10:04:23.184973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.434 [2024-11-20 10:04:23.184981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.434 [2024-11-20 10:04:23.184988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.434 [2024-11-20 10:04:23.185006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.434 qpair failed and we were unable to recover it.
00:30:52.434 [2024-11-20 10:04:23.194943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.434 [2024-11-20 10:04:23.194997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.434 [2024-11-20 10:04:23.195010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.434 [2024-11-20 10:04:23.195017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.434 [2024-11-20 10:04:23.195024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.434 [2024-11-20 10:04:23.195038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.434 qpair failed and we were unable to recover it.
00:30:52.434 [2024-11-20 10:04:23.204968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.434 [2024-11-20 10:04:23.205018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.434 [2024-11-20 10:04:23.205032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.434 [2024-11-20 10:04:23.205039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.434 [2024-11-20 10:04:23.205046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.434 [2024-11-20 10:04:23.205060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.434 qpair failed and we were unable to recover it.
00:30:52.434 [2024-11-20 10:04:23.214983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.434 [2024-11-20 10:04:23.215034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.434 [2024-11-20 10:04:23.215047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.434 [2024-11-20 10:04:23.215054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.434 [2024-11-20 10:04:23.215060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.434 [2024-11-20 10:04:23.215075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.434 qpair failed and we were unable to recover it.
00:30:52.434 [2024-11-20 10:04:23.224979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.434 [2024-11-20 10:04:23.225028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.434 [2024-11-20 10:04:23.225041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.434 [2024-11-20 10:04:23.225049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.434 [2024-11-20 10:04:23.225055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.434 [2024-11-20 10:04:23.225069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.434 qpair failed and we were unable to recover it.
00:30:52.434 [2024-11-20 10:04:23.234928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.434 [2024-11-20 10:04:23.234988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.434 [2024-11-20 10:04:23.235001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.434 [2024-11-20 10:04:23.235008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.434 [2024-11-20 10:04:23.235015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.434 [2024-11-20 10:04:23.235029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.434 qpair failed and we were unable to recover it.
00:30:52.434 [2024-11-20 10:04:23.245070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.434 [2024-11-20 10:04:23.245127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.435 [2024-11-20 10:04:23.245140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.435 [2024-11-20 10:04:23.245147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.435 [2024-11-20 10:04:23.245153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.435 [2024-11-20 10:04:23.245172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.435 qpair failed and we were unable to recover it.
00:30:52.435 [2024-11-20 10:04:23.255098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.435 [2024-11-20 10:04:23.255150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.435 [2024-11-20 10:04:23.255166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.435 [2024-11-20 10:04:23.255174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.435 [2024-11-20 10:04:23.255180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.435 [2024-11-20 10:04:23.255194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.435 qpair failed and we were unable to recover it.
00:30:52.435 [2024-11-20 10:04:23.265075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.435 [2024-11-20 10:04:23.265120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.435 [2024-11-20 10:04:23.265133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.435 [2024-11-20 10:04:23.265141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.435 [2024-11-20 10:04:23.265147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.435 [2024-11-20 10:04:23.265168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.435 qpair failed and we were unable to recover it.
00:30:52.435 [2024-11-20 10:04:23.275127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.435 [2024-11-20 10:04:23.275186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.435 [2024-11-20 10:04:23.275199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.435 [2024-11-20 10:04:23.275210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.435 [2024-11-20 10:04:23.275217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.435 [2024-11-20 10:04:23.275231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.435 qpair failed and we were unable to recover it.
00:30:52.435 [2024-11-20 10:04:23.285197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.435 [2024-11-20 10:04:23.285250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.435 [2024-11-20 10:04:23.285263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.435 [2024-11-20 10:04:23.285271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.435 [2024-11-20 10:04:23.285278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.435 [2024-11-20 10:04:23.285293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.435 qpair failed and we were unable to recover it.
00:30:52.435 [2024-11-20 10:04:23.295197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.435 [2024-11-20 10:04:23.295248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.435 [2024-11-20 10:04:23.295261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.435 [2024-11-20 10:04:23.295268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.435 [2024-11-20 10:04:23.295275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.435 [2024-11-20 10:04:23.295290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.435 qpair failed and we were unable to recover it.
00:30:52.435 [2024-11-20 10:04:23.305192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.435 [2024-11-20 10:04:23.305245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.435 [2024-11-20 10:04:23.305258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.435 [2024-11-20 10:04:23.305266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.435 [2024-11-20 10:04:23.305273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.435 [2024-11-20 10:04:23.305287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.435 qpair failed and we were unable to recover it.
00:30:52.435 [2024-11-20 10:04:23.315267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.435 [2024-11-20 10:04:23.315323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.435 [2024-11-20 10:04:23.315335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.435 [2024-11-20 10:04:23.315343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.435 [2024-11-20 10:04:23.315349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.435 [2024-11-20 10:04:23.315368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.435 qpair failed and we were unable to recover it.
00:30:52.435 [2024-11-20 10:04:23.325169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.435 [2024-11-20 10:04:23.325232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.435 [2024-11-20 10:04:23.325247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.435 [2024-11-20 10:04:23.325254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.435 [2024-11-20 10:04:23.325261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.435 [2024-11-20 10:04:23.325276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.435 qpair failed and we were unable to recover it.
00:30:52.435 [2024-11-20 10:04:23.335319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.435 [2024-11-20 10:04:23.335369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.435 [2024-11-20 10:04:23.335383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.435 [2024-11-20 10:04:23.335390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.435 [2024-11-20 10:04:23.335397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.435 [2024-11-20 10:04:23.335412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.435 qpair failed and we were unable to recover it.
00:30:52.696 [2024-11-20 10:04:23.345277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.696 [2024-11-20 10:04:23.345321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.696 [2024-11-20 10:04:23.345334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.696 [2024-11-20 10:04:23.345342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.696 [2024-11-20 10:04:23.345348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.696 [2024-11-20 10:04:23.345363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.696 qpair failed and we were unable to recover it.
00:30:52.696 [2024-11-20 10:04:23.355251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.696 [2024-11-20 10:04:23.355307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.696 [2024-11-20 10:04:23.355321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.696 [2024-11-20 10:04:23.355332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.696 [2024-11-20 10:04:23.355339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.696 [2024-11-20 10:04:23.355354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.696 qpair failed and we were unable to recover it.
00:30:52.696 [2024-11-20 10:04:23.365403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.696 [2024-11-20 10:04:23.365455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.696 [2024-11-20 10:04:23.365469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.697 [2024-11-20 10:04:23.365477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.697 [2024-11-20 10:04:23.365484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.697 [2024-11-20 10:04:23.365498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.697 qpair failed and we were unable to recover it.
00:30:52.697 [2024-11-20 10:04:23.375302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.697 [2024-11-20 10:04:23.375359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.697 [2024-11-20 10:04:23.375372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.697 [2024-11-20 10:04:23.375380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.697 [2024-11-20 10:04:23.375387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.697 [2024-11-20 10:04:23.375402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.697 qpair failed and we were unable to recover it.
00:30:52.697 [2024-11-20 10:04:23.385444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.697 [2024-11-20 10:04:23.385493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.697 [2024-11-20 10:04:23.385506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.697 [2024-11-20 10:04:23.385513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.697 [2024-11-20 10:04:23.385520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.697 [2024-11-20 10:04:23.385535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.697 qpair failed and we were unable to recover it.
00:30:52.697 [2024-11-20 10:04:23.395460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.697 [2024-11-20 10:04:23.395517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.697 [2024-11-20 10:04:23.395530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.697 [2024-11-20 10:04:23.395537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.697 [2024-11-20 10:04:23.395544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.697 [2024-11-20 10:04:23.395559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.697 qpair failed and we were unable to recover it.
00:30:52.697 [2024-11-20 10:04:23.405503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.697 [2024-11-20 10:04:23.405555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.697 [2024-11-20 10:04:23.405571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.697 [2024-11-20 10:04:23.405578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.697 [2024-11-20 10:04:23.405585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.697 [2024-11-20 10:04:23.405600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.697 qpair failed and we were unable to recover it.
00:30:52.697 [2024-11-20 10:04:23.415536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.697 [2024-11-20 10:04:23.415587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.697 [2024-11-20 10:04:23.415600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.697 [2024-11-20 10:04:23.415608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.697 [2024-11-20 10:04:23.415615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.697 [2024-11-20 10:04:23.415630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.697 qpair failed and we were unable to recover it.
00:30:52.697 [2024-11-20 10:04:23.425480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.697 [2024-11-20 10:04:23.425527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.697 [2024-11-20 10:04:23.425540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.697 [2024-11-20 10:04:23.425547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.697 [2024-11-20 10:04:23.425554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.697 [2024-11-20 10:04:23.425568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.697 qpair failed and we were unable to recover it.
00:30:52.697 [2024-11-20 10:04:23.435450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.697 [2024-11-20 10:04:23.435503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.697 [2024-11-20 10:04:23.435516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.697 [2024-11-20 10:04:23.435524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.697 [2024-11-20 10:04:23.435531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.697 [2024-11-20 10:04:23.435545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.697 qpair failed and we were unable to recover it.
00:30:52.697 [2024-11-20 10:04:23.445615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.697 [2024-11-20 10:04:23.445670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.697 [2024-11-20 10:04:23.445683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.697 [2024-11-20 10:04:23.445691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.697 [2024-11-20 10:04:23.445701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.697 [2024-11-20 10:04:23.445715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.697 qpair failed and we were unable to recover it.
00:30:52.697 [2024-11-20 10:04:23.455633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.697 [2024-11-20 10:04:23.455690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.697 [2024-11-20 10:04:23.455703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.697 [2024-11-20 10:04:23.455710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.697 [2024-11-20 10:04:23.455717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.697 [2024-11-20 10:04:23.455731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.697 qpair failed and we were unable to recover it.
00:30:52.697 [2024-11-20 10:04:23.465640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.697 [2024-11-20 10:04:23.465698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.697 [2024-11-20 10:04:23.465711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.697 [2024-11-20 10:04:23.465719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.697 [2024-11-20 10:04:23.465725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.697 [2024-11-20 10:04:23.465739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.697 qpair failed and we were unable to recover it.
00:30:52.697 [2024-11-20 10:04:23.475707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.697 [2024-11-20 10:04:23.475767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.697 [2024-11-20 10:04:23.475780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.697 [2024-11-20 10:04:23.475787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.697 [2024-11-20 10:04:23.475794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.697 [2024-11-20 10:04:23.475808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.697 qpair failed and we were unable to recover it.
00:30:52.697 [2024-11-20 10:04:23.485730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.697 [2024-11-20 10:04:23.485785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.697 [2024-11-20 10:04:23.485799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.697 [2024-11-20 10:04:23.485806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.697 [2024-11-20 10:04:23.485812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.697 [2024-11-20 10:04:23.485827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.697 qpair failed and we were unable to recover it.
00:30:52.697 [2024-11-20 10:04:23.495646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.697 [2024-11-20 10:04:23.495695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.697 [2024-11-20 10:04:23.495708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.697 [2024-11-20 10:04:23.495716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.698 [2024-11-20 10:04:23.495723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.698 [2024-11-20 10:04:23.495737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.698 qpair failed and we were unable to recover it.
00:30:52.698 [2024-11-20 10:04:23.505728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.698 [2024-11-20 10:04:23.505777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.698 [2024-11-20 10:04:23.505790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.698 [2024-11-20 10:04:23.505797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.698 [2024-11-20 10:04:23.505804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.698 [2024-11-20 10:04:23.505818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.698 qpair failed and we were unable to recover it.
00:30:52.698 [2024-11-20 10:04:23.515804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.698 [2024-11-20 10:04:23.515859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.698 [2024-11-20 10:04:23.515872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.698 [2024-11-20 10:04:23.515879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.698 [2024-11-20 10:04:23.515886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.698 [2024-11-20 10:04:23.515900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.698 qpair failed and we were unable to recover it.
00:30:52.698 [2024-11-20 10:04:23.525835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.698 [2024-11-20 10:04:23.525890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.698 [2024-11-20 10:04:23.525904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.698 [2024-11-20 10:04:23.525911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.698 [2024-11-20 10:04:23.525918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.698 [2024-11-20 10:04:23.525932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.698 qpair failed and we were unable to recover it.
00:30:52.698 [2024-11-20 10:04:23.535780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.698 [2024-11-20 10:04:23.535831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.698 [2024-11-20 10:04:23.535848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.698 [2024-11-20 10:04:23.535855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.698 [2024-11-20 10:04:23.535862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.698 [2024-11-20 10:04:23.535876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.698 qpair failed and we were unable to recover it.
00:30:52.698 [2024-11-20 10:04:23.545844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.698 [2024-11-20 10:04:23.545898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.698 [2024-11-20 10:04:23.545922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.698 [2024-11-20 10:04:23.545931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.698 [2024-11-20 10:04:23.545938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.698 [2024-11-20 10:04:23.545958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.698 qpair failed and we were unable to recover it.
00:30:52.698 [2024-11-20 10:04:23.555926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.698 [2024-11-20 10:04:23.555983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.698 [2024-11-20 10:04:23.555998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.698 [2024-11-20 10:04:23.556005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.698 [2024-11-20 10:04:23.556012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.698 [2024-11-20 10:04:23.556028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.698 qpair failed and we were unable to recover it.
00:30:52.698 [2024-11-20 10:04:23.565938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.698 [2024-11-20 10:04:23.565991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.698 [2024-11-20 10:04:23.566004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.698 [2024-11-20 10:04:23.566012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.698 [2024-11-20 10:04:23.566019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.698 [2024-11-20 10:04:23.566034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.698 qpair failed and we were unable to recover it.
00:30:52.698 [2024-11-20 10:04:23.575963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.698 [2024-11-20 10:04:23.576018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.698 [2024-11-20 10:04:23.576031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.698 [2024-11-20 10:04:23.576038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.698 [2024-11-20 10:04:23.576049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.698 [2024-11-20 10:04:23.576065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.698 qpair failed and we were unable to recover it.
00:30:52.698 [2024-11-20 10:04:23.585945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.698 [2024-11-20 10:04:23.585992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.698 [2024-11-20 10:04:23.586006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.698 [2024-11-20 10:04:23.586013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.698 [2024-11-20 10:04:23.586020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.698 [2024-11-20 10:04:23.586035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.698 qpair failed and we were unable to recover it.
00:30:52.698 [2024-11-20 10:04:23.595885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.698 [2024-11-20 10:04:23.595936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.698 [2024-11-20 10:04:23.595949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.698 [2024-11-20 10:04:23.595956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.698 [2024-11-20 10:04:23.595963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.698 [2024-11-20 10:04:23.595978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.698 qpair failed and we were unable to recover it.
00:30:52.698 [2024-11-20 10:04:23.606061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.698 [2024-11-20 10:04:23.606161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.698 [2024-11-20 10:04:23.606175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.698 [2024-11-20 10:04:23.606183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.698 [2024-11-20 10:04:23.606190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.698 [2024-11-20 10:04:23.606205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.698 qpair failed and we were unable to recover it.
00:30:52.960 [2024-11-20 10:04:23.616060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.960 [2024-11-20 10:04:23.616146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.960 [2024-11-20 10:04:23.616167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.960 [2024-11-20 10:04:23.616176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.960 [2024-11-20 10:04:23.616187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.960 [2024-11-20 10:04:23.616202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.960 qpair failed and we were unable to recover it.
00:30:52.960 [2024-11-20 10:04:23.626062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.960 [2024-11-20 10:04:23.626109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.960 [2024-11-20 10:04:23.626122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.960 [2024-11-20 10:04:23.626130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.960 [2024-11-20 10:04:23.626137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.960 [2024-11-20 10:04:23.626151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.960 qpair failed and we were unable to recover it.
00:30:52.960 [2024-11-20 10:04:23.636146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.960 [2024-11-20 10:04:23.636204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.960 [2024-11-20 10:04:23.636218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.960 [2024-11-20 10:04:23.636225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.960 [2024-11-20 10:04:23.636232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.960 [2024-11-20 10:04:23.636247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.960 qpair failed and we were unable to recover it.
00:30:52.960 [2024-11-20 10:04:23.646187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.960 [2024-11-20 10:04:23.646250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.960 [2024-11-20 10:04:23.646264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.960 [2024-11-20 10:04:23.646271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.960 [2024-11-20 10:04:23.646278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.960 [2024-11-20 10:04:23.646293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.960 qpair failed and we were unable to recover it.
00:30:52.960 [2024-11-20 10:04:23.656191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.960 [2024-11-20 10:04:23.656242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.960 [2024-11-20 10:04:23.656255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.960 [2024-11-20 10:04:23.656262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.960 [2024-11-20 10:04:23.656268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.960 [2024-11-20 10:04:23.656283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.960 qpair failed and we were unable to recover it.
00:30:52.960 [2024-11-20 10:04:23.666067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.960 [2024-11-20 10:04:23.666117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.960 [2024-11-20 10:04:23.666130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.960 [2024-11-20 10:04:23.666137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.960 [2024-11-20 10:04:23.666144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.960 [2024-11-20 10:04:23.666162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.960 qpair failed and we were unable to recover it.
00:30:52.960 [2024-11-20 10:04:23.676265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.960 [2024-11-20 10:04:23.676357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.961 [2024-11-20 10:04:23.676370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.961 [2024-11-20 10:04:23.676378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.961 [2024-11-20 10:04:23.676384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.961 [2024-11-20 10:04:23.676399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.961 qpair failed and we were unable to recover it.
00:30:52.961 [2024-11-20 10:04:23.686264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.961 [2024-11-20 10:04:23.686316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.961 [2024-11-20 10:04:23.686329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.961 [2024-11-20 10:04:23.686336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.961 [2024-11-20 10:04:23.686343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.961 [2024-11-20 10:04:23.686358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.961 qpair failed and we were unable to recover it.
00:30:52.961 [2024-11-20 10:04:23.696179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.961 [2024-11-20 10:04:23.696234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.961 [2024-11-20 10:04:23.696247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.961 [2024-11-20 10:04:23.696254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.961 [2024-11-20 10:04:23.696262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.961 [2024-11-20 10:04:23.696277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.961 qpair failed and we were unable to recover it.
00:30:52.961 [2024-11-20 10:04:23.706199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.961 [2024-11-20 10:04:23.706250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.961 [2024-11-20 10:04:23.706263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.961 [2024-11-20 10:04:23.706274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.961 [2024-11-20 10:04:23.706281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.961 [2024-11-20 10:04:23.706297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.961 qpair failed and we were unable to recover it.
00:30:52.961 [2024-11-20 10:04:23.716375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.961 [2024-11-20 10:04:23.716429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.961 [2024-11-20 10:04:23.716443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.961 [2024-11-20 10:04:23.716451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.961 [2024-11-20 10:04:23.716457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.961 [2024-11-20 10:04:23.716471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.961 qpair failed and we were unable to recover it.
00:30:52.961 [2024-11-20 10:04:23.726401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:52.961 [2024-11-20 10:04:23.726454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:52.961 [2024-11-20 10:04:23.726468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:52.961 [2024-11-20 10:04:23.726475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:52.961 [2024-11-20 10:04:23.726482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:52.961 [2024-11-20 10:04:23.726497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:52.961 qpair failed and we were unable to recover it.
00:30:52.961 [2024-11-20 10:04:23.736414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.961 [2024-11-20 10:04:23.736465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.961 [2024-11-20 10:04:23.736478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.961 [2024-11-20 10:04:23.736486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.961 [2024-11-20 10:04:23.736492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:52.961 [2024-11-20 10:04:23.736507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.961 qpair failed and we were unable to recover it. 00:30:52.961 [2024-11-20 10:04:23.746466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.961 [2024-11-20 10:04:23.746518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.961 [2024-11-20 10:04:23.746534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.961 [2024-11-20 10:04:23.746541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.961 [2024-11-20 10:04:23.746548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:52.961 [2024-11-20 10:04:23.746569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.961 qpair failed and we were unable to recover it. 00:30:52.961 [2024-11-20 10:04:23.756475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.961 [2024-11-20 10:04:23.756526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.961 [2024-11-20 10:04:23.756541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.961 [2024-11-20 10:04:23.756548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.961 [2024-11-20 10:04:23.756555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:52.961 [2024-11-20 10:04:23.756570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.961 qpair failed and we were unable to recover it. 
00:30:52.961 [2024-11-20 10:04:23.766380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.961 [2024-11-20 10:04:23.766486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.961 [2024-11-20 10:04:23.766500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.961 [2024-11-20 10:04:23.766508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.961 [2024-11-20 10:04:23.766514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:52.961 [2024-11-20 10:04:23.766528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.961 qpair failed and we were unable to recover it. 00:30:52.961 [2024-11-20 10:04:23.776543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.961 [2024-11-20 10:04:23.776595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.961 [2024-11-20 10:04:23.776608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.961 [2024-11-20 10:04:23.776616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.961 [2024-11-20 10:04:23.776623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:52.961 [2024-11-20 10:04:23.776638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.961 qpair failed and we were unable to recover it. 00:30:52.961 [2024-11-20 10:04:23.786498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.961 [2024-11-20 10:04:23.786553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.961 [2024-11-20 10:04:23.786566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.961 [2024-11-20 10:04:23.786573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.961 [2024-11-20 10:04:23.786580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:52.961 [2024-11-20 10:04:23.786594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.961 qpair failed and we were unable to recover it. 
00:30:52.961 [2024-11-20 10:04:23.796595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.961 [2024-11-20 10:04:23.796697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.961 [2024-11-20 10:04:23.796710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.961 [2024-11-20 10:04:23.796718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.961 [2024-11-20 10:04:23.796725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:52.961 [2024-11-20 10:04:23.796739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.961 qpair failed and we were unable to recover it. 00:30:52.961 [2024-11-20 10:04:23.806596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.961 [2024-11-20 10:04:23.806649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.962 [2024-11-20 10:04:23.806662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.962 [2024-11-20 10:04:23.806669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.962 [2024-11-20 10:04:23.806676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:52.962 [2024-11-20 10:04:23.806690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.962 qpair failed and we were unable to recover it. 00:30:52.962 [2024-11-20 10:04:23.816597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.962 [2024-11-20 10:04:23.816656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.962 [2024-11-20 10:04:23.816669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.962 [2024-11-20 10:04:23.816676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.962 [2024-11-20 10:04:23.816683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:52.962 [2024-11-20 10:04:23.816697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.962 qpair failed and we were unable to recover it. 
00:30:52.962 [2024-11-20 10:04:23.826609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.962 [2024-11-20 10:04:23.826657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.962 [2024-11-20 10:04:23.826670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.962 [2024-11-20 10:04:23.826677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.962 [2024-11-20 10:04:23.826684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:52.962 [2024-11-20 10:04:23.826698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.962 qpair failed and we were unable to recover it. 00:30:52.962 [2024-11-20 10:04:23.836711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.962 [2024-11-20 10:04:23.836769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.962 [2024-11-20 10:04:23.836782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.962 [2024-11-20 10:04:23.836793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.962 [2024-11-20 10:04:23.836800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:52.962 [2024-11-20 10:04:23.836814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.962 qpair failed and we were unable to recover it. 00:30:52.962 [2024-11-20 10:04:23.846654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.962 [2024-11-20 10:04:23.846710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.962 [2024-11-20 10:04:23.846724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.962 [2024-11-20 10:04:23.846731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.962 [2024-11-20 10:04:23.846738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:52.962 [2024-11-20 10:04:23.846758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.962 qpair failed and we were unable to recover it. 
00:30:52.962 [2024-11-20 10:04:23.856750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.962 [2024-11-20 10:04:23.856801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.962 [2024-11-20 10:04:23.856814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.962 [2024-11-20 10:04:23.856822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.962 [2024-11-20 10:04:23.856828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:52.962 [2024-11-20 10:04:23.856843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.962 qpair failed and we were unable to recover it. 00:30:52.962 [2024-11-20 10:04:23.866732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.962 [2024-11-20 10:04:23.866792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.962 [2024-11-20 10:04:23.866805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.962 [2024-11-20 10:04:23.866813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.962 [2024-11-20 10:04:23.866819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:52.962 [2024-11-20 10:04:23.866834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.962 qpair failed and we were unable to recover it. 00:30:53.223 [2024-11-20 10:04:23.876779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.223 [2024-11-20 10:04:23.876833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.223 [2024-11-20 10:04:23.876846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.223 [2024-11-20 10:04:23.876854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.223 [2024-11-20 10:04:23.876860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.223 [2024-11-20 10:04:23.876879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.223 qpair failed and we were unable to recover it. 
00:30:53.223 [2024-11-20 10:04:23.886830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.223 [2024-11-20 10:04:23.886883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.223 [2024-11-20 10:04:23.886896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.223 [2024-11-20 10:04:23.886903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.223 [2024-11-20 10:04:23.886912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.223 [2024-11-20 10:04:23.886926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.224 qpair failed and we were unable to recover it. 00:30:53.224 [2024-11-20 10:04:23.896833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.224 [2024-11-20 10:04:23.896883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.224 [2024-11-20 10:04:23.896896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.224 [2024-11-20 10:04:23.896904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.224 [2024-11-20 10:04:23.896910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.224 [2024-11-20 10:04:23.896924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.224 qpair failed and we were unable to recover it. 00:30:53.224 [2024-11-20 10:04:23.906837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.224 [2024-11-20 10:04:23.906901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.224 [2024-11-20 10:04:23.906915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.224 [2024-11-20 10:04:23.906922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.224 [2024-11-20 10:04:23.906929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.224 [2024-11-20 10:04:23.906944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.224 qpair failed and we were unable to recover it. 
00:30:53.224 [2024-11-20 10:04:23.916824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.224 [2024-11-20 10:04:23.916919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.224 [2024-11-20 10:04:23.916933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.224 [2024-11-20 10:04:23.916941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.224 [2024-11-20 10:04:23.916948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.224 [2024-11-20 10:04:23.916963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.224 qpair failed and we were unable to recover it. 00:30:53.224 [2024-11-20 10:04:23.926956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.224 [2024-11-20 10:04:23.927006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.224 [2024-11-20 10:04:23.927019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.224 [2024-11-20 10:04:23.927027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.224 [2024-11-20 10:04:23.927034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.224 [2024-11-20 10:04:23.927048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.224 qpair failed and we were unable to recover it. 00:30:53.224 [2024-11-20 10:04:23.936978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.224 [2024-11-20 10:04:23.937025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.224 [2024-11-20 10:04:23.937038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.224 [2024-11-20 10:04:23.937046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.224 [2024-11-20 10:04:23.937053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.224 [2024-11-20 10:04:23.937067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.224 qpair failed and we were unable to recover it. 
00:30:53.224 [2024-11-20 10:04:23.946956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.224 [2024-11-20 10:04:23.947006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.224 [2024-11-20 10:04:23.947019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.224 [2024-11-20 10:04:23.947027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.224 [2024-11-20 10:04:23.947034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.224 [2024-11-20 10:04:23.947048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.224 qpair failed and we were unable to recover it. 00:30:53.224 [2024-11-20 10:04:23.957034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.224 [2024-11-20 10:04:23.957088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.224 [2024-11-20 10:04:23.957101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.224 [2024-11-20 10:04:23.957108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.224 [2024-11-20 10:04:23.957115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.224 [2024-11-20 10:04:23.957129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.224 qpair failed and we were unable to recover it. 00:30:53.224 [2024-11-20 10:04:23.967074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.224 [2024-11-20 10:04:23.967122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.224 [2024-11-20 10:04:23.967139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.224 [2024-11-20 10:04:23.967146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.224 [2024-11-20 10:04:23.967153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.224 [2024-11-20 10:04:23.967172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.224 qpair failed and we were unable to recover it. 
00:30:53.224 [2024-11-20 10:04:23.977034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.224 [2024-11-20 10:04:23.977092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.224 [2024-11-20 10:04:23.977105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.224 [2024-11-20 10:04:23.977112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.224 [2024-11-20 10:04:23.977119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.224 [2024-11-20 10:04:23.977133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.224 qpair failed and we were unable to recover it. 00:30:53.224 [2024-11-20 10:04:23.987064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.224 [2024-11-20 10:04:23.987110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.224 [2024-11-20 10:04:23.987123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.224 [2024-11-20 10:04:23.987130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.224 [2024-11-20 10:04:23.987137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.224 [2024-11-20 10:04:23.987151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.224 qpair failed and we were unable to recover it. 00:30:53.224 [2024-11-20 10:04:23.997152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.224 [2024-11-20 10:04:23.997209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.224 [2024-11-20 10:04:23.997222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.224 [2024-11-20 10:04:23.997230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.224 [2024-11-20 10:04:23.997236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.224 [2024-11-20 10:04:23.997251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.224 qpair failed and we were unable to recover it. 
00:30:53.224 [2024-11-20 10:04:24.007189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.224 [2024-11-20 10:04:24.007244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.224 [2024-11-20 10:04:24.007257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.224 [2024-11-20 10:04:24.007265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.224 [2024-11-20 10:04:24.007278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.224 [2024-11-20 10:04:24.007293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.224 qpair failed and we were unable to recover it. 00:30:53.224 [2024-11-20 10:04:24.017086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.224 [2024-11-20 10:04:24.017181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.224 [2024-11-20 10:04:24.017195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.225 [2024-11-20 10:04:24.017203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.225 [2024-11-20 10:04:24.017210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.225 [2024-11-20 10:04:24.017231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.225 qpair failed and we were unable to recover it. 00:30:53.225 [2024-11-20 10:04:24.027103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.225 [2024-11-20 10:04:24.027194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.225 [2024-11-20 10:04:24.027208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.225 [2024-11-20 10:04:24.027216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.225 [2024-11-20 10:04:24.027223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.225 [2024-11-20 10:04:24.027238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.225 qpair failed and we were unable to recover it. 
00:30:53.225 [2024-11-20 10:04:24.037262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.225 [2024-11-20 10:04:24.037321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.225 [2024-11-20 10:04:24.037335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.225 [2024-11-20 10:04:24.037342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.225 [2024-11-20 10:04:24.037349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.225 [2024-11-20 10:04:24.037364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.225 qpair failed and we were unable to recover it. 00:30:53.225 [2024-11-20 10:04:24.047263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.225 [2024-11-20 10:04:24.047323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.225 [2024-11-20 10:04:24.047336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.225 [2024-11-20 10:04:24.047343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.225 [2024-11-20 10:04:24.047350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.225 [2024-11-20 10:04:24.047364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.225 qpair failed and we were unable to recover it. 00:30:53.225 [2024-11-20 10:04:24.057300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.225 [2024-11-20 10:04:24.057349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.225 [2024-11-20 10:04:24.057362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.225 [2024-11-20 10:04:24.057370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.225 [2024-11-20 10:04:24.057376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.225 [2024-11-20 10:04:24.057391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.225 qpair failed and we were unable to recover it. 
00:30:53.225 [2024-11-20 10:04:24.067293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.225 [2024-11-20 10:04:24.067344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.225 [2024-11-20 10:04:24.067357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.225 [2024-11-20 10:04:24.067365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.225 [2024-11-20 10:04:24.067371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.225 [2024-11-20 10:04:24.067386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.225 qpair failed and we were unable to recover it. 00:30:53.225 [2024-11-20 10:04:24.077339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.225 [2024-11-20 10:04:24.077397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.225 [2024-11-20 10:04:24.077410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.225 [2024-11-20 10:04:24.077417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.225 [2024-11-20 10:04:24.077424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.225 [2024-11-20 10:04:24.077438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.225 qpair failed and we were unable to recover it. 00:30:53.225 [2024-11-20 10:04:24.087280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.225 [2024-11-20 10:04:24.087337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.225 [2024-11-20 10:04:24.087353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.225 [2024-11-20 10:04:24.087361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.225 [2024-11-20 10:04:24.087368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.225 [2024-11-20 10:04:24.087383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.225 qpair failed and we were unable to recover it. 
00:30:53.225 [2024-11-20 10:04:24.097401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.225 [2024-11-20 10:04:24.097457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.225 [2024-11-20 10:04:24.097474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.225 [2024-11-20 10:04:24.097482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.225 [2024-11-20 10:04:24.097488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.225 [2024-11-20 10:04:24.097503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.225 qpair failed and we were unable to recover it. 00:30:53.225 [2024-11-20 10:04:24.107392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.225 [2024-11-20 10:04:24.107460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.225 [2024-11-20 10:04:24.107474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.225 [2024-11-20 10:04:24.107481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.225 [2024-11-20 10:04:24.107488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.225 [2024-11-20 10:04:24.107502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.225 qpair failed and we were unable to recover it. 00:30:53.225 [2024-11-20 10:04:24.117489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.225 [2024-11-20 10:04:24.117547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.225 [2024-11-20 10:04:24.117560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.225 [2024-11-20 10:04:24.117567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.225 [2024-11-20 10:04:24.117574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.225 [2024-11-20 10:04:24.117588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.225 qpair failed and we were unable to recover it. 
00:30:53.225 [2024-11-20 10:04:24.127485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.225 [2024-11-20 10:04:24.127540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.225 [2024-11-20 10:04:24.127553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.225 [2024-11-20 10:04:24.127560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.225 [2024-11-20 10:04:24.127567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.225 [2024-11-20 10:04:24.127581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.225 qpair failed and we were unable to recover it. 00:30:53.486 [2024-11-20 10:04:24.137531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.486 [2024-11-20 10:04:24.137585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.486 [2024-11-20 10:04:24.137598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.486 [2024-11-20 10:04:24.137605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.486 [2024-11-20 10:04:24.137616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.486 [2024-11-20 10:04:24.137631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.486 qpair failed and we were unable to recover it. 00:30:53.486 [2024-11-20 10:04:24.147509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.486 [2024-11-20 10:04:24.147554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.486 [2024-11-20 10:04:24.147567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.486 [2024-11-20 10:04:24.147575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.486 [2024-11-20 10:04:24.147582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.486 [2024-11-20 10:04:24.147596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.486 qpair failed and we were unable to recover it. 
00:30:53.486 [2024-11-20 10:04:24.157679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.486 [2024-11-20 10:04:24.157738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.486 [2024-11-20 10:04:24.157751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.486 [2024-11-20 10:04:24.157759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.486 [2024-11-20 10:04:24.157766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.486 [2024-11-20 10:04:24.157780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.486 qpair failed and we were unable to recover it. 00:30:53.487 [2024-11-20 10:04:24.167563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.487 [2024-11-20 10:04:24.167616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.487 [2024-11-20 10:04:24.167629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.487 [2024-11-20 10:04:24.167637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.487 [2024-11-20 10:04:24.167643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.487 [2024-11-20 10:04:24.167659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.487 qpair failed and we were unable to recover it. 00:30:53.487 [2024-11-20 10:04:24.177601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.487 [2024-11-20 10:04:24.177657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.487 [2024-11-20 10:04:24.177670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.487 [2024-11-20 10:04:24.177677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.487 [2024-11-20 10:04:24.177683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.487 [2024-11-20 10:04:24.177698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.487 qpair failed and we were unable to recover it. 
00:30:53.487 [2024-11-20 10:04:24.187509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.487 [2024-11-20 10:04:24.187559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.487 [2024-11-20 10:04:24.187573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.487 [2024-11-20 10:04:24.187580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.487 [2024-11-20 10:04:24.187587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.487 [2024-11-20 10:04:24.187602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.487 qpair failed and we were unable to recover it. 00:30:53.487 [2024-11-20 10:04:24.197705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.487 [2024-11-20 10:04:24.197763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.487 [2024-11-20 10:04:24.197777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.487 [2024-11-20 10:04:24.197784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.487 [2024-11-20 10:04:24.197791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.487 [2024-11-20 10:04:24.197806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.487 qpair failed and we were unable to recover it. 00:30:53.487 [2024-11-20 10:04:24.207734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.487 [2024-11-20 10:04:24.207788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.487 [2024-11-20 10:04:24.207801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.487 [2024-11-20 10:04:24.207808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.487 [2024-11-20 10:04:24.207815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.487 [2024-11-20 10:04:24.207831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.487 qpair failed and we were unable to recover it. 
00:30:53.487 [2024-11-20 10:04:24.217746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.487 [2024-11-20 10:04:24.217795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.487 [2024-11-20 10:04:24.217808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.487 [2024-11-20 10:04:24.217816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.487 [2024-11-20 10:04:24.217822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.487 [2024-11-20 10:04:24.217837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.487 qpair failed and we were unable to recover it. 00:30:53.487 [2024-11-20 10:04:24.227741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.487 [2024-11-20 10:04:24.227795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.487 [2024-11-20 10:04:24.227808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.487 [2024-11-20 10:04:24.227816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.487 [2024-11-20 10:04:24.227822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.487 [2024-11-20 10:04:24.227837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.487 qpair failed and we were unable to recover it. 00:30:53.487 [2024-11-20 10:04:24.237706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.487 [2024-11-20 10:04:24.237764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.487 [2024-11-20 10:04:24.237778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.487 [2024-11-20 10:04:24.237786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.487 [2024-11-20 10:04:24.237792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.487 [2024-11-20 10:04:24.237812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.487 qpair failed and we were unable to recover it. 
00:30:53.487 [2024-11-20 10:04:24.247841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.487 [2024-11-20 10:04:24.247892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.487 [2024-11-20 10:04:24.247906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.487 [2024-11-20 10:04:24.247913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.487 [2024-11-20 10:04:24.247920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.487 [2024-11-20 10:04:24.247934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.487 qpair failed and we were unable to recover it. 00:30:53.487 [2024-11-20 10:04:24.257832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.487 [2024-11-20 10:04:24.257886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.487 [2024-11-20 10:04:24.257899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.487 [2024-11-20 10:04:24.257906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.487 [2024-11-20 10:04:24.257913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.487 [2024-11-20 10:04:24.257927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.487 qpair failed and we were unable to recover it. 00:30:53.487 [2024-11-20 10:04:24.267855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.487 [2024-11-20 10:04:24.267926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.487 [2024-11-20 10:04:24.267940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.487 [2024-11-20 10:04:24.267951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.487 [2024-11-20 10:04:24.267960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.487 [2024-11-20 10:04:24.267976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.487 qpair failed and we were unable to recover it. 
00:30:53.487 [2024-11-20 10:04:24.277903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.487 [2024-11-20 10:04:24.277964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.487 [2024-11-20 10:04:24.277978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.487 [2024-11-20 10:04:24.277985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.487 [2024-11-20 10:04:24.277991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.487 [2024-11-20 10:04:24.278006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.487 qpair failed and we were unable to recover it. 00:30:53.487 [2024-11-20 10:04:24.287952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.487 [2024-11-20 10:04:24.288002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.487 [2024-11-20 10:04:24.288015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.487 [2024-11-20 10:04:24.288023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.488 [2024-11-20 10:04:24.288029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.488 [2024-11-20 10:04:24.288044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.488 qpair failed and we were unable to recover it. 00:30:53.488 [2024-11-20 10:04:24.297870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.488 [2024-11-20 10:04:24.297938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.488 [2024-11-20 10:04:24.297951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.488 [2024-11-20 10:04:24.297958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.488 [2024-11-20 10:04:24.297965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:53.488 [2024-11-20 10:04:24.297979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.488 qpair failed and we were unable to recover it. 
00:30:53.488 [2024-11-20 10:04:24.307940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.488 [2024-11-20 10:04:24.307988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.488 [2024-11-20 10:04:24.308001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.488 [2024-11-20 10:04:24.308009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.488 [2024-11-20 10:04:24.308015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.488 [2024-11-20 10:04:24.308034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.488 qpair failed and we were unable to recover it.
00:30:53.488 [2024-11-20 10:04:24.318016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.488 [2024-11-20 10:04:24.318073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.488 [2024-11-20 10:04:24.318086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.488 [2024-11-20 10:04:24.318094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.488 [2024-11-20 10:04:24.318101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.488 [2024-11-20 10:04:24.318116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.488 qpair failed and we were unable to recover it.
00:30:53.488 [2024-11-20 10:04:24.328063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.488 [2024-11-20 10:04:24.328118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.488 [2024-11-20 10:04:24.328131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.488 [2024-11-20 10:04:24.328139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.488 [2024-11-20 10:04:24.328146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.488 [2024-11-20 10:04:24.328165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.488 qpair failed and we were unable to recover it.
00:30:53.488 [2024-11-20 10:04:24.338073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.488 [2024-11-20 10:04:24.338164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.488 [2024-11-20 10:04:24.338178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.488 [2024-11-20 10:04:24.338186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.488 [2024-11-20 10:04:24.338192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.488 [2024-11-20 10:04:24.338209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.488 qpair failed and we were unable to recover it.
00:30:53.488 [2024-11-20 10:04:24.348050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.488 [2024-11-20 10:04:24.348092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.488 [2024-11-20 10:04:24.348105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.488 [2024-11-20 10:04:24.348113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.488 [2024-11-20 10:04:24.348119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.488 [2024-11-20 10:04:24.348134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.488 qpair failed and we were unable to recover it.
00:30:53.488 [2024-11-20 10:04:24.358239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.488 [2024-11-20 10:04:24.358313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.488 [2024-11-20 10:04:24.358326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.488 [2024-11-20 10:04:24.358334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.488 [2024-11-20 10:04:24.358340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.488 [2024-11-20 10:04:24.358355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.488 qpair failed and we were unable to recover it.
00:30:53.488 [2024-11-20 10:04:24.368229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.488 [2024-11-20 10:04:24.368320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.488 [2024-11-20 10:04:24.368333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.488 [2024-11-20 10:04:24.368340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.488 [2024-11-20 10:04:24.368347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.488 [2024-11-20 10:04:24.368361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.488 qpair failed and we were unable to recover it.
00:30:53.488 [2024-11-20 10:04:24.378243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.488 [2024-11-20 10:04:24.378298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.488 [2024-11-20 10:04:24.378311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.488 [2024-11-20 10:04:24.378318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.488 [2024-11-20 10:04:24.378325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.488 [2024-11-20 10:04:24.378339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.488 qpair failed and we were unable to recover it.
00:30:53.488 [2024-11-20 10:04:24.388212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.488 [2024-11-20 10:04:24.388258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.488 [2024-11-20 10:04:24.388271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.488 [2024-11-20 10:04:24.388278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.488 [2024-11-20 10:04:24.388285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.488 [2024-11-20 10:04:24.388300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.488 qpair failed and we were unable to recover it.
00:30:53.488 [2024-11-20 10:04:24.398236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.488 [2024-11-20 10:04:24.398313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.488 [2024-11-20 10:04:24.398329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.488 [2024-11-20 10:04:24.398336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.488 [2024-11-20 10:04:24.398343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.488 [2024-11-20 10:04:24.398358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.488 qpair failed and we were unable to recover it.
00:30:53.750 [2024-11-20 10:04:24.408263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.751 [2024-11-20 10:04:24.408317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.751 [2024-11-20 10:04:24.408331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.751 [2024-11-20 10:04:24.408338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.751 [2024-11-20 10:04:24.408345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.751 [2024-11-20 10:04:24.408360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.751 qpair failed and we were unable to recover it.
00:30:53.751 [2024-11-20 10:04:24.418216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.751 [2024-11-20 10:04:24.418299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.751 [2024-11-20 10:04:24.418312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.751 [2024-11-20 10:04:24.418320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.751 [2024-11-20 10:04:24.418327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.751 [2024-11-20 10:04:24.418342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.751 qpair failed and we were unable to recover it.
00:30:53.751 [2024-11-20 10:04:24.428270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.751 [2024-11-20 10:04:24.428320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.751 [2024-11-20 10:04:24.428333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.751 [2024-11-20 10:04:24.428340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.751 [2024-11-20 10:04:24.428346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.751 [2024-11-20 10:04:24.428361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.751 qpair failed and we were unable to recover it.
00:30:53.751 [2024-11-20 10:04:24.438235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.751 [2024-11-20 10:04:24.438289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.751 [2024-11-20 10:04:24.438303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.751 [2024-11-20 10:04:24.438310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.751 [2024-11-20 10:04:24.438317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.751 [2024-11-20 10:04:24.438335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.751 qpair failed and we were unable to recover it.
00:30:53.751 [2024-11-20 10:04:24.448371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.751 [2024-11-20 10:04:24.448454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.751 [2024-11-20 10:04:24.448467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.751 [2024-11-20 10:04:24.448475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.751 [2024-11-20 10:04:24.448483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.751 [2024-11-20 10:04:24.448498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.751 qpair failed and we were unable to recover it.
00:30:53.751 [2024-11-20 10:04:24.458415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.751 [2024-11-20 10:04:24.458465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.751 [2024-11-20 10:04:24.458479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.751 [2024-11-20 10:04:24.458487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.751 [2024-11-20 10:04:24.458493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.751 [2024-11-20 10:04:24.458508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.751 qpair failed and we were unable to recover it.
00:30:53.751 [2024-11-20 10:04:24.468262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.751 [2024-11-20 10:04:24.468309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.751 [2024-11-20 10:04:24.468322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.751 [2024-11-20 10:04:24.468329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.751 [2024-11-20 10:04:24.468336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.751 [2024-11-20 10:04:24.468351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.751 qpair failed and we were unable to recover it.
00:30:53.751 [2024-11-20 10:04:24.478513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.751 [2024-11-20 10:04:24.478617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.751 [2024-11-20 10:04:24.478632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.751 [2024-11-20 10:04:24.478640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.751 [2024-11-20 10:04:24.478646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.751 [2024-11-20 10:04:24.478666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.751 qpair failed and we were unable to recover it.
00:30:53.751 [2024-11-20 10:04:24.488554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.751 [2024-11-20 10:04:24.488623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.751 [2024-11-20 10:04:24.488637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.751 [2024-11-20 10:04:24.488644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.751 [2024-11-20 10:04:24.488651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.751 [2024-11-20 10:04:24.488665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.751 qpair failed and we were unable to recover it.
00:30:53.751 [2024-11-20 10:04:24.498518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.751 [2024-11-20 10:04:24.498571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.751 [2024-11-20 10:04:24.498585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.751 [2024-11-20 10:04:24.498592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.751 [2024-11-20 10:04:24.498599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.751 [2024-11-20 10:04:24.498613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.751 qpair failed and we were unable to recover it.
00:30:53.751 [2024-11-20 10:04:24.508506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.751 [2024-11-20 10:04:24.508553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.751 [2024-11-20 10:04:24.508566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.751 [2024-11-20 10:04:24.508573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.751 [2024-11-20 10:04:24.508580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.751 [2024-11-20 10:04:24.508594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.751 qpair failed and we were unable to recover it.
00:30:53.751 [2024-11-20 10:04:24.518588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.751 [2024-11-20 10:04:24.518641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.751 [2024-11-20 10:04:24.518654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.751 [2024-11-20 10:04:24.518662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.751 [2024-11-20 10:04:24.518668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.751 [2024-11-20 10:04:24.518683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.751 qpair failed and we were unable to recover it.
00:30:53.751 [2024-11-20 10:04:24.528622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.751 [2024-11-20 10:04:24.528677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.751 [2024-11-20 10:04:24.528697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.751 [2024-11-20 10:04:24.528705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.751 [2024-11-20 10:04:24.528712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.752 [2024-11-20 10:04:24.528726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.752 qpair failed and we were unable to recover it.
00:30:53.752 [2024-11-20 10:04:24.538549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.752 [2024-11-20 10:04:24.538600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.752 [2024-11-20 10:04:24.538614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.752 [2024-11-20 10:04:24.538621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.752 [2024-11-20 10:04:24.538628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.752 [2024-11-20 10:04:24.538643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.752 qpair failed and we were unable to recover it.
00:30:53.752 [2024-11-20 10:04:24.548618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.752 [2024-11-20 10:04:24.548668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.752 [2024-11-20 10:04:24.548681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.752 [2024-11-20 10:04:24.548689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.752 [2024-11-20 10:04:24.548695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.752 [2024-11-20 10:04:24.548709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.752 qpair failed and we were unable to recover it.
00:30:53.752 [2024-11-20 10:04:24.558676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.752 [2024-11-20 10:04:24.558729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.752 [2024-11-20 10:04:24.558742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.752 [2024-11-20 10:04:24.558750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.752 [2024-11-20 10:04:24.558756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.752 [2024-11-20 10:04:24.558771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.752 qpair failed and we were unable to recover it.
00:30:53.752 [2024-11-20 10:04:24.568737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.752 [2024-11-20 10:04:24.568797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.752 [2024-11-20 10:04:24.568810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.752 [2024-11-20 10:04:24.568817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.752 [2024-11-20 10:04:24.568827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.752 [2024-11-20 10:04:24.568842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.752 qpair failed and we were unable to recover it.
00:30:53.752 [2024-11-20 10:04:24.578748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.752 [2024-11-20 10:04:24.578807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.752 [2024-11-20 10:04:24.578821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.752 [2024-11-20 10:04:24.578828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.752 [2024-11-20 10:04:24.578835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.752 [2024-11-20 10:04:24.578849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.752 qpair failed and we were unable to recover it.
00:30:53.752 [2024-11-20 10:04:24.588734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.752 [2024-11-20 10:04:24.588780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.752 [2024-11-20 10:04:24.588793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.752 [2024-11-20 10:04:24.588801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.752 [2024-11-20 10:04:24.588808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.752 [2024-11-20 10:04:24.588822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.752 qpair failed and we were unable to recover it.
00:30:53.752 [2024-11-20 10:04:24.598821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.752 [2024-11-20 10:04:24.598873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.752 [2024-11-20 10:04:24.598886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.752 [2024-11-20 10:04:24.598893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.752 [2024-11-20 10:04:24.598900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.752 [2024-11-20 10:04:24.598914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.752 qpair failed and we were unable to recover it.
00:30:53.752 [2024-11-20 10:04:24.608854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.752 [2024-11-20 10:04:24.608913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.752 [2024-11-20 10:04:24.608937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.752 [2024-11-20 10:04:24.608946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.752 [2024-11-20 10:04:24.608953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.752 [2024-11-20 10:04:24.608973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.752 qpair failed and we were unable to recover it.
00:30:53.752 [2024-11-20 10:04:24.618869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.752 [2024-11-20 10:04:24.618931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.752 [2024-11-20 10:04:24.618958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.752 [2024-11-20 10:04:24.618968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.752 [2024-11-20 10:04:24.618976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.752 [2024-11-20 10:04:24.618996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.752 qpair failed and we were unable to recover it.
00:30:53.752 [2024-11-20 10:04:24.628854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.752 [2024-11-20 10:04:24.628902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.752 [2024-11-20 10:04:24.628918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.752 [2024-11-20 10:04:24.628926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.752 [2024-11-20 10:04:24.628932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.752 [2024-11-20 10:04:24.628949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.752 qpair failed and we were unable to recover it.
00:30:53.752 [2024-11-20 10:04:24.638921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.752 [2024-11-20 10:04:24.638984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.752 [2024-11-20 10:04:24.639008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.752 [2024-11-20 10:04:24.639017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.752 [2024-11-20 10:04:24.639024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.752 [2024-11-20 10:04:24.639045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.752 qpair failed and we were unable to recover it.
00:30:53.752 [2024-11-20 10:04:24.648951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.752 [2024-11-20 10:04:24.649010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.752 [2024-11-20 10:04:24.649025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.752 [2024-11-20 10:04:24.649032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.752 [2024-11-20 10:04:24.649039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.752 [2024-11-20 10:04:24.649055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.752 qpair failed and we were unable to recover it.
00:30:53.752 [2024-11-20 10:04:24.658979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:53.752 [2024-11-20 10:04:24.659031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:53.752 [2024-11-20 10:04:24.659050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:53.752 [2024-11-20 10:04:24.659057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:53.753 [2024-11-20 10:04:24.659064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:53.753 [2024-11-20 10:04:24.659079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:53.753 qpair failed and we were unable to recover it.
00:30:54.016 [2024-11-20 10:04:24.668943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.016 [2024-11-20 10:04:24.668990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.016 [2024-11-20 10:04:24.669004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.016 [2024-11-20 10:04:24.669012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.017 [2024-11-20 10:04:24.669019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.017 [2024-11-20 10:04:24.669034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.017 qpair failed and we were unable to recover it.
00:30:54.017 [2024-11-20 10:04:24.678899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.017 [2024-11-20 10:04:24.678960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.017 [2024-11-20 10:04:24.678975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.017 [2024-11-20 10:04:24.678983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.017 [2024-11-20 10:04:24.678989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.017 [2024-11-20 10:04:24.679005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.017 qpair failed and we were unable to recover it.
00:30:54.017 [2024-11-20 10:04:24.689057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.017 [2024-11-20 10:04:24.689113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.017 [2024-11-20 10:04:24.689127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.017 [2024-11-20 10:04:24.689134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.017 [2024-11-20 10:04:24.689141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.017 [2024-11-20 10:04:24.689156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.017 qpair failed and we were unable to recover it.
00:30:54.017 [2024-11-20 10:04:24.698958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.017 [2024-11-20 10:04:24.699016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.017 [2024-11-20 10:04:24.699030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.017 [2024-11-20 10:04:24.699041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.017 [2024-11-20 10:04:24.699048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.017 [2024-11-20 10:04:24.699063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.017 qpair failed and we were unable to recover it.
00:30:54.017 [2024-11-20 10:04:24.709056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.017 [2024-11-20 10:04:24.709104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.017 [2024-11-20 10:04:24.709117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.017 [2024-11-20 10:04:24.709125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.017 [2024-11-20 10:04:24.709131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.017 [2024-11-20 10:04:24.709146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.017 qpair failed and we were unable to recover it.
00:30:54.017 [2024-11-20 10:04:24.719111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.017 [2024-11-20 10:04:24.719167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.017 [2024-11-20 10:04:24.719180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.017 [2024-11-20 10:04:24.719188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.017 [2024-11-20 10:04:24.719194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.017 [2024-11-20 10:04:24.719209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.017 qpair failed and we were unable to recover it.
00:30:54.017 [2024-11-20 10:04:24.729123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.017 [2024-11-20 10:04:24.729177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.017 [2024-11-20 10:04:24.729191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.017 [2024-11-20 10:04:24.729199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.017 [2024-11-20 10:04:24.729206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.017 [2024-11-20 10:04:24.729221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.017 qpair failed and we were unable to recover it.
00:30:54.017 [2024-11-20 10:04:24.739062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.017 [2024-11-20 10:04:24.739128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.017 [2024-11-20 10:04:24.739141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.017 [2024-11-20 10:04:24.739149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.017 [2024-11-20 10:04:24.739155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.017 [2024-11-20 10:04:24.739174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.017 qpair failed and we were unable to recover it.
00:30:54.017 [2024-11-20 10:04:24.749175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.017 [2024-11-20 10:04:24.749220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.017 [2024-11-20 10:04:24.749234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.017 [2024-11-20 10:04:24.749241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.017 [2024-11-20 10:04:24.749248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.017 [2024-11-20 10:04:24.749263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.017 qpair failed and we were unable to recover it.
00:30:54.017 [2024-11-20 10:04:24.759256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.017 [2024-11-20 10:04:24.759311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.017 [2024-11-20 10:04:24.759324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.017 [2024-11-20 10:04:24.759331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.017 [2024-11-20 10:04:24.759338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.017 [2024-11-20 10:04:24.759353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.017 qpair failed and we were unable to recover it.
00:30:54.017 [2024-11-20 10:04:24.769261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.017 [2024-11-20 10:04:24.769315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.017 [2024-11-20 10:04:24.769328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.017 [2024-11-20 10:04:24.769336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.017 [2024-11-20 10:04:24.769342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.017 [2024-11-20 10:04:24.769357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.017 qpair failed and we were unable to recover it.
00:30:54.017 [2024-11-20 10:04:24.779300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.017 [2024-11-20 10:04:24.779358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.017 [2024-11-20 10:04:24.779370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.017 [2024-11-20 10:04:24.779378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.017 [2024-11-20 10:04:24.779384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.017 [2024-11-20 10:04:24.779399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.017 qpair failed and we were unable to recover it.
00:30:54.017 [2024-11-20 10:04:24.789302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.017 [2024-11-20 10:04:24.789350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.017 [2024-11-20 10:04:24.789363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.017 [2024-11-20 10:04:24.789370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.017 [2024-11-20 10:04:24.789377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.017 [2024-11-20 10:04:24.789392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.017 qpair failed and we were unable to recover it.
00:30:54.017 [2024-11-20 10:04:24.799380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.017 [2024-11-20 10:04:24.799433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.017 [2024-11-20 10:04:24.799446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.018 [2024-11-20 10:04:24.799454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.018 [2024-11-20 10:04:24.799461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.018 [2024-11-20 10:04:24.799475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.018 qpair failed and we were unable to recover it.
00:30:54.018 [2024-11-20 10:04:24.809400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.018 [2024-11-20 10:04:24.809453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.018 [2024-11-20 10:04:24.809466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.018 [2024-11-20 10:04:24.809474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.018 [2024-11-20 10:04:24.809481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.018 [2024-11-20 10:04:24.809495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.018 qpair failed and we were unable to recover it.
00:30:54.018 [2024-11-20 10:04:24.819282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.018 [2024-11-20 10:04:24.819347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.018 [2024-11-20 10:04:24.819361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.018 [2024-11-20 10:04:24.819369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.018 [2024-11-20 10:04:24.819375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.018 [2024-11-20 10:04:24.819391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.018 qpair failed and we were unable to recover it.
00:30:54.018 [2024-11-20 10:04:24.829399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.018 [2024-11-20 10:04:24.829450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.018 [2024-11-20 10:04:24.829463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.018 [2024-11-20 10:04:24.829475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.018 [2024-11-20 10:04:24.829481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.018 [2024-11-20 10:04:24.829496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.018 qpair failed and we were unable to recover it.
00:30:54.018 [2024-11-20 10:04:24.839471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.018 [2024-11-20 10:04:24.839540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.018 [2024-11-20 10:04:24.839554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.018 [2024-11-20 10:04:24.839562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.018 [2024-11-20 10:04:24.839568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.018 [2024-11-20 10:04:24.839583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.018 qpair failed and we were unable to recover it.
00:30:54.018 [2024-11-20 10:04:24.849497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.018 [2024-11-20 10:04:24.849599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.018 [2024-11-20 10:04:24.849613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.018 [2024-11-20 10:04:24.849620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.018 [2024-11-20 10:04:24.849626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.018 [2024-11-20 10:04:24.849641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.018 qpair failed and we were unable to recover it.
00:30:54.018 [2024-11-20 10:04:24.859526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.018 [2024-11-20 10:04:24.859624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.018 [2024-11-20 10:04:24.859637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.018 [2024-11-20 10:04:24.859644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.018 [2024-11-20 10:04:24.859651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.018 [2024-11-20 10:04:24.859666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.018 qpair failed and we were unable to recover it.
00:30:54.018 [2024-11-20 10:04:24.869496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.018 [2024-11-20 10:04:24.869547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.018 [2024-11-20 10:04:24.869560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.018 [2024-11-20 10:04:24.869567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.018 [2024-11-20 10:04:24.869574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.018 [2024-11-20 10:04:24.869593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.018 qpair failed and we were unable to recover it.
00:30:54.018 [2024-11-20 10:04:24.879558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.018 [2024-11-20 10:04:24.879608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.018 [2024-11-20 10:04:24.879621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.018 [2024-11-20 10:04:24.879628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.018 [2024-11-20 10:04:24.879634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.018 [2024-11-20 10:04:24.879649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.018 qpair failed and we were unable to recover it.
00:30:54.018 [2024-11-20 10:04:24.889614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.018 [2024-11-20 10:04:24.889669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.018 [2024-11-20 10:04:24.889682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.018 [2024-11-20 10:04:24.889690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.018 [2024-11-20 10:04:24.889697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.018 [2024-11-20 10:04:24.889711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.018 qpair failed and we were unable to recover it.
00:30:54.018 [2024-11-20 10:04:24.899596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.018 [2024-11-20 10:04:24.899690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.018 [2024-11-20 10:04:24.899703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.018 [2024-11-20 10:04:24.899710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.018 [2024-11-20 10:04:24.899717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.018 [2024-11-20 10:04:24.899732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.018 qpair failed and we were unable to recover it.
00:30:54.018 [2024-11-20 10:04:24.909505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.018 [2024-11-20 10:04:24.909559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.018 [2024-11-20 10:04:24.909572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.018 [2024-11-20 10:04:24.909580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.018 [2024-11-20 10:04:24.909586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.018 [2024-11-20 10:04:24.909600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.018 qpair failed and we were unable to recover it.
00:30:54.018 [2024-11-20 10:04:24.919679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.018 [2024-11-20 10:04:24.919736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.018 [2024-11-20 10:04:24.919750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.018 [2024-11-20 10:04:24.919757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.018 [2024-11-20 10:04:24.919764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.018 [2024-11-20 10:04:24.919779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.018 qpair failed and we were unable to recover it.
00:30:54.281 [2024-11-20 10:04:24.929617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.281 [2024-11-20 10:04:24.929718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.281 [2024-11-20 10:04:24.929731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.281 [2024-11-20 10:04:24.929738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.281 [2024-11-20 10:04:24.929745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.281 [2024-11-20 10:04:24.929759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.281 qpair failed and we were unable to recover it.
00:30:54.281 [2024-11-20 10:04:24.939728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.281 [2024-11-20 10:04:24.939781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.281 [2024-11-20 10:04:24.939794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.281 [2024-11-20 10:04:24.939801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.281 [2024-11-20 10:04:24.939808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.281 [2024-11-20 10:04:24.939823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.281 qpair failed and we were unable to recover it.
00:30:54.281 [2024-11-20 10:04:24.949683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.281 [2024-11-20 10:04:24.949728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.281 [2024-11-20 10:04:24.949742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.281 [2024-11-20 10:04:24.949749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.281 [2024-11-20 10:04:24.949756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.281 [2024-11-20 10:04:24.949770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.281 qpair failed and we were unable to recover it.
00:30:54.281 [2024-11-20 10:04:24.959803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.281 [2024-11-20 10:04:24.959894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.281 [2024-11-20 10:04:24.959911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.281 [2024-11-20 10:04:24.959919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.281 [2024-11-20 10:04:24.959926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.281 [2024-11-20 10:04:24.959940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.281 qpair failed and we were unable to recover it.
00:30:54.281 [2024-11-20 10:04:24.969830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.281 [2024-11-20 10:04:24.969895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.281 [2024-11-20 10:04:24.969907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.281 [2024-11-20 10:04:24.969915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.281 [2024-11-20 10:04:24.969921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.281 [2024-11-20 10:04:24.969936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.281 qpair failed and we were unable to recover it.
00:30:54.281 [2024-11-20 10:04:24.979831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.281 [2024-11-20 10:04:24.979890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.281 [2024-11-20 10:04:24.979915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.281 [2024-11-20 10:04:24.979924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.281 [2024-11-20 10:04:24.979931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.281 [2024-11-20 10:04:24.979951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.281 qpair failed and we were unable to recover it.
00:30:54.281 [2024-11-20 10:04:24.989834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.281 [2024-11-20 10:04:24.989883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.281 [2024-11-20 10:04:24.989898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.281 [2024-11-20 10:04:24.989906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.281 [2024-11-20 10:04:24.989912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.281 [2024-11-20 10:04:24.989928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.281 qpair failed and we were unable to recover it.
00:30:54.281 [2024-11-20 10:04:24.999910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.281 [2024-11-20 10:04:24.999960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.281 [2024-11-20 10:04:24.999974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.281 [2024-11-20 10:04:24.999981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.281 [2024-11-20 10:04:24.999988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.281 [2024-11-20 10:04:25.000008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.281 qpair failed and we were unable to recover it.
00:30:54.281 [2024-11-20 10:04:25.009904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.281 [2024-11-20 10:04:25.009957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.281 [2024-11-20 10:04:25.009970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.281 [2024-11-20 10:04:25.009978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.281 [2024-11-20 10:04:25.009984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.281 [2024-11-20 10:04:25.009999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.281 qpair failed and we were unable to recover it.
00:30:54.281 [2024-11-20 10:04:25.019962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.281 [2024-11-20 10:04:25.020013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.281 [2024-11-20 10:04:25.020026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.281 [2024-11-20 10:04:25.020034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.281 [2024-11-20 10:04:25.020040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.281 [2024-11-20 10:04:25.020055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.281 qpair failed and we were unable to recover it.
00:30:54.281 [2024-11-20 10:04:25.029834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.281 [2024-11-20 10:04:25.029919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.281 [2024-11-20 10:04:25.029932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.281 [2024-11-20 10:04:25.029940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.281 [2024-11-20 10:04:25.029947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.281 [2024-11-20 10:04:25.029962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.281 qpair failed and we were unable to recover it.
00:30:54.281 [2024-11-20 10:04:25.039998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.281 [2024-11-20 10:04:25.040053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.281 [2024-11-20 10:04:25.040066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.281 [2024-11-20 10:04:25.040074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.281 [2024-11-20 10:04:25.040081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.281 [2024-11-20 10:04:25.040097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.281 qpair failed and we were unable to recover it.
00:30:54.281 [2024-11-20 10:04:25.050020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.281 [2024-11-20 10:04:25.050076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.282 [2024-11-20 10:04:25.050089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.282 [2024-11-20 10:04:25.050097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.282 [2024-11-20 10:04:25.050103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.282 [2024-11-20 10:04:25.050118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.282 qpair failed and we were unable to recover it.
00:30:54.282 [2024-11-20 10:04:25.060065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.282 [2024-11-20 10:04:25.060116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.282 [2024-11-20 10:04:25.060129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.282 [2024-11-20 10:04:25.060136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.282 [2024-11-20 10:04:25.060143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.282 [2024-11-20 10:04:25.060163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.282 qpair failed and we were unable to recover it.
00:30:54.282 [2024-11-20 10:04:25.070037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.282 [2024-11-20 10:04:25.070101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.282 [2024-11-20 10:04:25.070114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.282 [2024-11-20 10:04:25.070121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.282 [2024-11-20 10:04:25.070128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.282 [2024-11-20 10:04:25.070143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.282 qpair failed and we were unable to recover it.
00:30:54.282 [2024-11-20 10:04:25.080092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.282 [2024-11-20 10:04:25.080149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.282 [2024-11-20 10:04:25.080166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.282 [2024-11-20 10:04:25.080174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.282 [2024-11-20 10:04:25.080180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.282 [2024-11-20 10:04:25.080195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.282 qpair failed and we were unable to recover it.
00:30:54.282 [2024-11-20 10:04:25.090143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.282 [2024-11-20 10:04:25.090210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.282 [2024-11-20 10:04:25.090227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.282 [2024-11-20 10:04:25.090235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.282 [2024-11-20 10:04:25.090242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.282 [2024-11-20 10:04:25.090257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.282 qpair failed and we were unable to recover it.
00:30:54.282 [2024-11-20 10:04:25.100073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.282 [2024-11-20 10:04:25.100122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.282 [2024-11-20 10:04:25.100135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.282 [2024-11-20 10:04:25.100143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.282 [2024-11-20 10:04:25.100150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.282 [2024-11-20 10:04:25.100170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.282 qpair failed and we were unable to recover it.
00:30:54.282 [2024-11-20 10:04:25.110167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.282 [2024-11-20 10:04:25.110214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.282 [2024-11-20 10:04:25.110227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.282 [2024-11-20 10:04:25.110235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.282 [2024-11-20 10:04:25.110241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.282 [2024-11-20 10:04:25.110256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.282 qpair failed and we were unable to recover it.
00:30:54.282 [2024-11-20 10:04:25.120275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.282 [2024-11-20 10:04:25.120379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.282 [2024-11-20 10:04:25.120393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.282 [2024-11-20 10:04:25.120400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.282 [2024-11-20 10:04:25.120407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.282 [2024-11-20 10:04:25.120422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.282 qpair failed and we were unable to recover it.
00:30:54.282 [2024-11-20 10:04:25.130258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.282 [2024-11-20 10:04:25.130312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.282 [2024-11-20 10:04:25.130325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.282 [2024-11-20 10:04:25.130333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.282 [2024-11-20 10:04:25.130342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.282 [2024-11-20 10:04:25.130357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.282 qpair failed and we were unable to recover it.
00:30:54.282 [2024-11-20 10:04:25.140247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.282 [2024-11-20 10:04:25.140343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.282 [2024-11-20 10:04:25.140356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.282 [2024-11-20 10:04:25.140363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.282 [2024-11-20 10:04:25.140370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.282 [2024-11-20 10:04:25.140385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.282 qpair failed and we were unable to recover it.
00:30:54.282 [2024-11-20 10:04:25.150141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.282 [2024-11-20 10:04:25.150192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.282 [2024-11-20 10:04:25.150206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.282 [2024-11-20 10:04:25.150213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.282 [2024-11-20 10:04:25.150220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.282 [2024-11-20 10:04:25.150242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.282 qpair failed and we were unable to recover it.
00:30:54.282 [2024-11-20 10:04:25.160315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.282 [2024-11-20 10:04:25.160370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.282 [2024-11-20 10:04:25.160384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.282 [2024-11-20 10:04:25.160391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.282 [2024-11-20 10:04:25.160397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.282 [2024-11-20 10:04:25.160412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.282 qpair failed and we were unable to recover it.
00:30:54.282 [2024-11-20 10:04:25.170383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.282 [2024-11-20 10:04:25.170432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.282 [2024-11-20 10:04:25.170445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.282 [2024-11-20 10:04:25.170453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.282 [2024-11-20 10:04:25.170459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.282 [2024-11-20 10:04:25.170474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.282 qpair failed and we were unable to recover it.
00:30:54.282 [2024-11-20 10:04:25.180419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.282 [2024-11-20 10:04:25.180473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.283 [2024-11-20 10:04:25.180486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.283 [2024-11-20 10:04:25.180493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.283 [2024-11-20 10:04:25.180500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.283 [2024-11-20 10:04:25.180514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.283 qpair failed and we were unable to recover it.
00:30:54.283 [2024-11-20 10:04:25.190296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.283 [2024-11-20 10:04:25.190343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.283 [2024-11-20 10:04:25.190356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.283 [2024-11-20 10:04:25.190363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.283 [2024-11-20 10:04:25.190370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.283 [2024-11-20 10:04:25.190384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.283 qpair failed and we were unable to recover it.
00:30:54.545 [2024-11-20 10:04:25.200448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.545 [2024-11-20 10:04:25.200506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.545 [2024-11-20 10:04:25.200520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.545 [2024-11-20 10:04:25.200527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.545 [2024-11-20 10:04:25.200534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.545 [2024-11-20 10:04:25.200548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.545 qpair failed and we were unable to recover it.
00:30:54.545 [2024-11-20 10:04:25.210481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.545 [2024-11-20 10:04:25.210539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.545 [2024-11-20 10:04:25.210552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.545 [2024-11-20 10:04:25.210559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.545 [2024-11-20 10:04:25.210566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.545 [2024-11-20 10:04:25.210580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.545 qpair failed and we were unable to recover it.
00:30:54.545 [2024-11-20 10:04:25.220387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.545 [2024-11-20 10:04:25.220441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.546 [2024-11-20 10:04:25.220458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.546 [2024-11-20 10:04:25.220466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.546 [2024-11-20 10:04:25.220472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.546 [2024-11-20 10:04:25.220487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.546 qpair failed and we were unable to recover it.
00:30:54.546 [2024-11-20 10:04:25.230496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.546 [2024-11-20 10:04:25.230543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.546 [2024-11-20 10:04:25.230556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.546 [2024-11-20 10:04:25.230563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.546 [2024-11-20 10:04:25.230570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.546 [2024-11-20 10:04:25.230584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.546 qpair failed and we were unable to recover it.
00:30:54.546 [2024-11-20 10:04:25.240544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.546 [2024-11-20 10:04:25.240598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.546 [2024-11-20 10:04:25.240611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.546 [2024-11-20 10:04:25.240619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.546 [2024-11-20 10:04:25.240625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.546 [2024-11-20 10:04:25.240639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.546 qpair failed and we were unable to recover it.
00:30:54.546 [2024-11-20 10:04:25.250602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.546 [2024-11-20 10:04:25.250656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.546 [2024-11-20 10:04:25.250669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.546 [2024-11-20 10:04:25.250676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.546 [2024-11-20 10:04:25.250683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.546 [2024-11-20 10:04:25.250697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.546 qpair failed and we were unable to recover it.
00:30:54.546 [2024-11-20 10:04:25.260576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.546 [2024-11-20 10:04:25.260629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.546 [2024-11-20 10:04:25.260641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.546 [2024-11-20 10:04:25.260656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.546 [2024-11-20 10:04:25.260663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.546 [2024-11-20 10:04:25.260678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.546 qpair failed and we were unable to recover it.
00:30:54.546 [2024-11-20 10:04:25.270592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.546 [2024-11-20 10:04:25.270641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.546 [2024-11-20 10:04:25.270654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.546 [2024-11-20 10:04:25.270662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.546 [2024-11-20 10:04:25.270668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.546 [2024-11-20 10:04:25.270683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.546 qpair failed and we were unable to recover it.
00:30:54.546 [2024-11-20 10:04:25.280686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.546 [2024-11-20 10:04:25.280739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.546 [2024-11-20 10:04:25.280752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.546 [2024-11-20 10:04:25.280759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.546 [2024-11-20 10:04:25.280766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.546 [2024-11-20 10:04:25.280780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.546 qpair failed and we were unable to recover it.
00:30:54.546 [2024-11-20 10:04:25.290582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.546 [2024-11-20 10:04:25.290641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.546 [2024-11-20 10:04:25.290654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.546 [2024-11-20 10:04:25.290662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.546 [2024-11-20 10:04:25.290668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.546 [2024-11-20 10:04:25.290682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.546 qpair failed and we were unable to recover it.
00:30:54.546 [2024-11-20 10:04:25.300711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.546 [2024-11-20 10:04:25.300760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.546 [2024-11-20 10:04:25.300772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.546 [2024-11-20 10:04:25.300779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.546 [2024-11-20 10:04:25.300786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.546 [2024-11-20 10:04:25.300800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.546 qpair failed and we were unable to recover it.
00:30:54.546 [2024-11-20 10:04:25.310724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.546 [2024-11-20 10:04:25.310775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.546 [2024-11-20 10:04:25.310789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.546 [2024-11-20 10:04:25.310796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.546 [2024-11-20 10:04:25.310803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.546 [2024-11-20 10:04:25.310818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.546 qpair failed and we were unable to recover it.
00:30:54.546 [2024-11-20 10:04:25.320786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.546 [2024-11-20 10:04:25.320846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.546 [2024-11-20 10:04:25.320860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.546 [2024-11-20 10:04:25.320867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.546 [2024-11-20 10:04:25.320874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.546 [2024-11-20 10:04:25.320888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.546 qpair failed and we were unable to recover it.
00:30:54.546 [2024-11-20 10:04:25.330826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.546 [2024-11-20 10:04:25.330883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.546 [2024-11-20 10:04:25.330896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.546 [2024-11-20 10:04:25.330903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.546 [2024-11-20 10:04:25.330909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.546 [2024-11-20 10:04:25.330923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.546 qpair failed and we were unable to recover it.
00:30:54.546 [2024-11-20 10:04:25.340840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.546 [2024-11-20 10:04:25.340890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.546 [2024-11-20 10:04:25.340904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.546 [2024-11-20 10:04:25.340911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.546 [2024-11-20 10:04:25.340917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.546 [2024-11-20 10:04:25.340932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.546 qpair failed and we were unable to recover it.
00:30:54.546 [2024-11-20 10:04:25.350816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.547 [2024-11-20 10:04:25.350865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.547 [2024-11-20 10:04:25.350878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.547 [2024-11-20 10:04:25.350885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.547 [2024-11-20 10:04:25.350892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.547 [2024-11-20 10:04:25.350906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.547 qpair failed and we were unable to recover it.
00:30:54.547 [2024-11-20 10:04:25.360912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.547 [2024-11-20 10:04:25.360968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.547 [2024-11-20 10:04:25.360980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.547 [2024-11-20 10:04:25.360988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.547 [2024-11-20 10:04:25.360994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.547 [2024-11-20 10:04:25.361009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.547 qpair failed and we were unable to recover it.
00:30:54.547 [2024-11-20 10:04:25.370920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.547 [2024-11-20 10:04:25.370979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.547 [2024-11-20 10:04:25.370992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.547 [2024-11-20 10:04:25.370999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.547 [2024-11-20 10:04:25.371005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.547 [2024-11-20 10:04:25.371020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.547 qpair failed and we were unable to recover it.
00:30:54.547 [2024-11-20 10:04:25.380943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.547 [2024-11-20 10:04:25.381001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.547 [2024-11-20 10:04:25.381014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.547 [2024-11-20 10:04:25.381021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.547 [2024-11-20 10:04:25.381028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.547 [2024-11-20 10:04:25.381042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.547 qpair failed and we were unable to recover it.
00:30:54.547 [2024-11-20 10:04:25.390947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.547 [2024-11-20 10:04:25.390997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.547 [2024-11-20 10:04:25.391010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.547 [2024-11-20 10:04:25.391021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.547 [2024-11-20 10:04:25.391028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.547 [2024-11-20 10:04:25.391042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.547 qpair failed and we were unable to recover it.
00:30:54.547 [2024-11-20 10:04:25.401020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.547 [2024-11-20 10:04:25.401077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.547 [2024-11-20 10:04:25.401090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.547 [2024-11-20 10:04:25.401098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.547 [2024-11-20 10:04:25.401105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.547 [2024-11-20 10:04:25.401119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.547 qpair failed and we were unable to recover it.
00:30:54.547 [2024-11-20 10:04:25.411046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.547 [2024-11-20 10:04:25.411101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.547 [2024-11-20 10:04:25.411114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.547 [2024-11-20 10:04:25.411122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.547 [2024-11-20 10:04:25.411128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.547 [2024-11-20 10:04:25.411143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.547 qpair failed and we were unable to recover it.
00:30:54.547 [2024-11-20 10:04:25.421063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.547 [2024-11-20 10:04:25.421119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.547 [2024-11-20 10:04:25.421132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.547 [2024-11-20 10:04:25.421139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.547 [2024-11-20 10:04:25.421145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.547 [2024-11-20 10:04:25.421163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.547 qpair failed and we were unable to recover it.
00:30:54.547 [2024-11-20 10:04:25.431045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.547 [2024-11-20 10:04:25.431122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.547 [2024-11-20 10:04:25.431135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.547 [2024-11-20 10:04:25.431142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.547 [2024-11-20 10:04:25.431149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.547 [2024-11-20 10:04:25.431171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.547 qpair failed and we were unable to recover it.
00:30:54.547 [2024-11-20 10:04:25.441115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:54.547 [2024-11-20 10:04:25.441172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:54.547 [2024-11-20 10:04:25.441185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:54.547 [2024-11-20 10:04:25.441193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:54.547 [2024-11-20 10:04:25.441199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:54.547 [2024-11-20 10:04:25.441214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:54.547 qpair failed and we were unable to recover it.
00:30:54.547 [2024-11-20 10:04:25.451148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.547 [2024-11-20 10:04:25.451202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.547 [2024-11-20 10:04:25.451215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.547 [2024-11-20 10:04:25.451222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.547 [2024-11-20 10:04:25.451229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.547 [2024-11-20 10:04:25.451243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.809 [2024-11-20 10:04:25.461156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.809 [2024-11-20 10:04:25.461208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.809 [2024-11-20 10:04:25.461221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.809 [2024-11-20 10:04:25.461229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.809 [2024-11-20 10:04:25.461235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.809 [2024-11-20 10:04:25.461250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.809 qpair failed and we were unable to recover it. 00:30:54.809 [2024-11-20 10:04:25.471137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.809 [2024-11-20 10:04:25.471190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.809 [2024-11-20 10:04:25.471204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.809 [2024-11-20 10:04:25.471211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.809 [2024-11-20 10:04:25.471217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.809 [2024-11-20 10:04:25.471232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.809 qpair failed and we were unable to recover it. 
00:30:54.809 [2024-11-20 10:04:25.481215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.809 [2024-11-20 10:04:25.481273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.809 [2024-11-20 10:04:25.481286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.809 [2024-11-20 10:04:25.481293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.809 [2024-11-20 10:04:25.481300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.809 [2024-11-20 10:04:25.481314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.809 qpair failed and we were unable to recover it. 00:30:54.809 [2024-11-20 10:04:25.491261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.809 [2024-11-20 10:04:25.491323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.809 [2024-11-20 10:04:25.491336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.809 [2024-11-20 10:04:25.491344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.809 [2024-11-20 10:04:25.491350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.810 [2024-11-20 10:04:25.491365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.810 qpair failed and we were unable to recover it. 00:30:54.810 [2024-11-20 10:04:25.501255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.810 [2024-11-20 10:04:25.501330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.810 [2024-11-20 10:04:25.501343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.810 [2024-11-20 10:04:25.501350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.810 [2024-11-20 10:04:25.501356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.810 [2024-11-20 10:04:25.501371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.810 qpair failed and we were unable to recover it. 
00:30:54.810 [2024-11-20 10:04:25.511264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.810 [2024-11-20 10:04:25.511315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.810 [2024-11-20 10:04:25.511328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.810 [2024-11-20 10:04:25.511335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.810 [2024-11-20 10:04:25.511341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.810 [2024-11-20 10:04:25.511355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.810 qpair failed and we were unable to recover it. 00:30:54.810 [2024-11-20 10:04:25.521369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.810 [2024-11-20 10:04:25.521425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.810 [2024-11-20 10:04:25.521441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.810 [2024-11-20 10:04:25.521448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.810 [2024-11-20 10:04:25.521455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.810 [2024-11-20 10:04:25.521470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.810 qpair failed and we were unable to recover it. 00:30:54.810 [2024-11-20 10:04:25.531245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.810 [2024-11-20 10:04:25.531300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.810 [2024-11-20 10:04:25.531315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.810 [2024-11-20 10:04:25.531323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.810 [2024-11-20 10:04:25.531329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.810 [2024-11-20 10:04:25.531345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.810 qpair failed and we were unable to recover it. 
00:30:54.810 [2024-11-20 10:04:25.541384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.810 [2024-11-20 10:04:25.541435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.810 [2024-11-20 10:04:25.541449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.810 [2024-11-20 10:04:25.541456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.810 [2024-11-20 10:04:25.541462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.810 [2024-11-20 10:04:25.541477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.810 qpair failed and we were unable to recover it. 00:30:54.810 [2024-11-20 10:04:25.551375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.810 [2024-11-20 10:04:25.551422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.810 [2024-11-20 10:04:25.551436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.810 [2024-11-20 10:04:25.551443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.810 [2024-11-20 10:04:25.551450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.810 [2024-11-20 10:04:25.551464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.810 qpair failed and we were unable to recover it. 00:30:54.810 [2024-11-20 10:04:25.561458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.810 [2024-11-20 10:04:25.561514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.810 [2024-11-20 10:04:25.561527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.810 [2024-11-20 10:04:25.561534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.810 [2024-11-20 10:04:25.561543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.810 [2024-11-20 10:04:25.561558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.810 qpair failed and we were unable to recover it. 
00:30:54.810 [2024-11-20 10:04:25.571358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.810 [2024-11-20 10:04:25.571417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.810 [2024-11-20 10:04:25.571431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.810 [2024-11-20 10:04:25.571438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.810 [2024-11-20 10:04:25.571445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.810 [2024-11-20 10:04:25.571466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.810 qpair failed and we were unable to recover it. 00:30:54.810 [2024-11-20 10:04:25.581496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.810 [2024-11-20 10:04:25.581555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.810 [2024-11-20 10:04:25.581568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.810 [2024-11-20 10:04:25.581576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.810 [2024-11-20 10:04:25.581582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.810 [2024-11-20 10:04:25.581596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.810 qpair failed and we were unable to recover it. 00:30:54.810 [2024-11-20 10:04:25.591471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.810 [2024-11-20 10:04:25.591522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.810 [2024-11-20 10:04:25.591536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.810 [2024-11-20 10:04:25.591543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.810 [2024-11-20 10:04:25.591549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.810 [2024-11-20 10:04:25.591564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.810 qpair failed and we were unable to recover it. 
00:30:54.810 [2024-11-20 10:04:25.601545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.810 [2024-11-20 10:04:25.601601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.810 [2024-11-20 10:04:25.601614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.810 [2024-11-20 10:04:25.601622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.810 [2024-11-20 10:04:25.601628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.810 [2024-11-20 10:04:25.601642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.810 qpair failed and we were unable to recover it. 00:30:54.810 [2024-11-20 10:04:25.611585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.810 [2024-11-20 10:04:25.611651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.810 [2024-11-20 10:04:25.611664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.810 [2024-11-20 10:04:25.611672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.810 [2024-11-20 10:04:25.611678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.811 [2024-11-20 10:04:25.611693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.811 qpair failed and we were unable to recover it. 00:30:54.811 [2024-11-20 10:04:25.621482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.811 [2024-11-20 10:04:25.621534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.811 [2024-11-20 10:04:25.621548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.811 [2024-11-20 10:04:25.621555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.811 [2024-11-20 10:04:25.621561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.811 [2024-11-20 10:04:25.621576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.811 qpair failed and we were unable to recover it. 
00:30:54.811 [2024-11-20 10:04:25.631587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.811 [2024-11-20 10:04:25.631636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.811 [2024-11-20 10:04:25.631649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.811 [2024-11-20 10:04:25.631656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.811 [2024-11-20 10:04:25.631663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.811 [2024-11-20 10:04:25.631677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.811 qpair failed and we were unable to recover it. 00:30:54.811 [2024-11-20 10:04:25.641660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.811 [2024-11-20 10:04:25.641721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.811 [2024-11-20 10:04:25.641734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.811 [2024-11-20 10:04:25.641742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.811 [2024-11-20 10:04:25.641748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.811 [2024-11-20 10:04:25.641762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.811 qpair failed and we were unable to recover it. 00:30:54.811 [2024-11-20 10:04:25.651677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.811 [2024-11-20 10:04:25.651768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.811 [2024-11-20 10:04:25.651784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.811 [2024-11-20 10:04:25.651791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.811 [2024-11-20 10:04:25.651798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.811 [2024-11-20 10:04:25.651813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.811 qpair failed and we were unable to recover it. 
00:30:54.811 [2024-11-20 10:04:25.661700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.811 [2024-11-20 10:04:25.661776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.811 [2024-11-20 10:04:25.661789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.811 [2024-11-20 10:04:25.661797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.811 [2024-11-20 10:04:25.661803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.811 [2024-11-20 10:04:25.661817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.811 qpair failed and we were unable to recover it. 00:30:54.811 [2024-11-20 10:04:25.671689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.811 [2024-11-20 10:04:25.671736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.811 [2024-11-20 10:04:25.671749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.811 [2024-11-20 10:04:25.671756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.811 [2024-11-20 10:04:25.671763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.811 [2024-11-20 10:04:25.671777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.811 qpair failed and we were unable to recover it. 00:30:54.811 [2024-11-20 10:04:25.681772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.811 [2024-11-20 10:04:25.681828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.811 [2024-11-20 10:04:25.681841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.811 [2024-11-20 10:04:25.681848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.811 [2024-11-20 10:04:25.681855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.811 [2024-11-20 10:04:25.681870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.811 qpair failed and we were unable to recover it. 
00:30:54.811 [2024-11-20 10:04:25.691859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.811 [2024-11-20 10:04:25.691922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.811 [2024-11-20 10:04:25.691935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.811 [2024-11-20 10:04:25.691942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.811 [2024-11-20 10:04:25.691952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.811 [2024-11-20 10:04:25.691967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.811 qpair failed and we were unable to recover it. 00:30:54.811 [2024-11-20 10:04:25.701820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.811 [2024-11-20 10:04:25.701872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.811 [2024-11-20 10:04:25.701886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.811 [2024-11-20 10:04:25.701893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.811 [2024-11-20 10:04:25.701900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.811 [2024-11-20 10:04:25.701914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.811 qpair failed and we were unable to recover it. 00:30:54.811 [2024-11-20 10:04:25.711818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.811 [2024-11-20 10:04:25.711865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.811 [2024-11-20 10:04:25.711878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.811 [2024-11-20 10:04:25.711885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.811 [2024-11-20 10:04:25.711892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:54.811 [2024-11-20 10:04:25.711906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.811 qpair failed and we were unable to recover it. 
00:30:55.074 [2024-11-20 10:04:25.721789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.074 [2024-11-20 10:04:25.721848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.074 [2024-11-20 10:04:25.721861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.074 [2024-11-20 10:04:25.721869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.074 [2024-11-20 10:04:25.721876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.074 [2024-11-20 10:04:25.721891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.074 qpair failed and we were unable to recover it. 00:30:55.074 [2024-11-20 10:04:25.731953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.074 [2024-11-20 10:04:25.732030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.074 [2024-11-20 10:04:25.732044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.074 [2024-11-20 10:04:25.732051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.074 [2024-11-20 10:04:25.732058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.074 [2024-11-20 10:04:25.732073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.074 qpair failed and we were unable to recover it. 00:30:55.074 [2024-11-20 10:04:25.741900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.074 [2024-11-20 10:04:25.741953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.074 [2024-11-20 10:04:25.741967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.074 [2024-11-20 10:04:25.741974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.074 [2024-11-20 10:04:25.741980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.074 [2024-11-20 10:04:25.741995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.074 qpair failed and we were unable to recover it. 
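For context on why the target keeps answering "Unknown controller ID 0x1": an I/O-queue CONNECT carries, in its data buffer, the controller ID that the earlier admin-queue CONNECT returned, and the target looks that ID up before attaching the new queue pair. The layout below is a minimal sketch written out from the NVMe-oF CONNECT definition; the struct and field names are this sketch's own, not SPDK's.

/* CONNECT command data layout (NVMe-oF specification); re-declared
 * here for illustration only. */
#include <stdint.h>

struct connect_data {
	uint8_t  hostid[16];    /* bytes 0-15: host identifier */
	uint16_t cntlid;        /* bytes 16-17: 0xFFFF on the admin queue
	                         * (requests a dynamic controller), but the
	                         * assigned ID (here 0x1) on I/O queues */
	uint8_t  reserved[238]; /* bytes 18-255 */
	char     subnqn[256];   /* bytes 256-511: nqn.2016-06.io.spdk:cnode1 */
	char     hostnqn[256];  /* bytes 512-767 */
};

If the target has meanwhile destroyed controller 0x1 (for example because its admin queue pair was dropped by the test), the lookup by cntlid fails and the CONNECT completes with sct 1 / sc 0x82, which is exactly the pattern repeating in this log.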
00:30:55.074 [2024-11-20 10:04:25.751972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.074 [2024-11-20 10:04:25.752041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.074 [2024-11-20 10:04:25.752055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.074 [2024-11-20 10:04:25.752062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.074 [2024-11-20 10:04:25.752068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.074 [2024-11-20 10:04:25.752083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.074 qpair failed and we were unable to recover it. 00:30:55.074 [2024-11-20 10:04:25.761966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.074 [2024-11-20 10:04:25.762034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.074 [2024-11-20 10:04:25.762047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.074 [2024-11-20 10:04:25.762055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.074 [2024-11-20 10:04:25.762062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.074 [2024-11-20 10:04:25.762076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.074 qpair failed and we were unable to recover it. 00:30:55.074 [2024-11-20 10:04:25.772035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.074 [2024-11-20 10:04:25.772092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.075 [2024-11-20 10:04:25.772105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.075 [2024-11-20 10:04:25.772112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.075 [2024-11-20 10:04:25.772119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.075 [2024-11-20 10:04:25.772134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.075 qpair failed and we were unable to recover it. 
00:30:55.075 [2024-11-20 10:04:25.782044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.075 [2024-11-20 10:04:25.782101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.075 [2024-11-20 10:04:25.782117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.075 [2024-11-20 10:04:25.782125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.075 [2024-11-20 10:04:25.782132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.075 [2024-11-20 10:04:25.782146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.075 qpair failed and we were unable to recover it. 00:30:55.075 [2024-11-20 10:04:25.791908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.075 [2024-11-20 10:04:25.791958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.075 [2024-11-20 10:04:25.791972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.075 [2024-11-20 10:04:25.791979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.075 [2024-11-20 10:04:25.791986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.075 [2024-11-20 10:04:25.792006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.075 qpair failed and we were unable to recover it. 00:30:55.075 [2024-11-20 10:04:25.802099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.075 [2024-11-20 10:04:25.802157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.075 [2024-11-20 10:04:25.802173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.075 [2024-11-20 10:04:25.802181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.075 [2024-11-20 10:04:25.802188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.075 [2024-11-20 10:04:25.802203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.075 qpair failed and we were unable to recover it. 
00:30:55.075 [2024-11-20 10:04:25.812142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.075 [2024-11-20 10:04:25.812233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.075 [2024-11-20 10:04:25.812247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.075 [2024-11-20 10:04:25.812255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.075 [2024-11-20 10:04:25.812261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.075 [2024-11-20 10:04:25.812276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.075 qpair failed and we were unable to recover it. 00:30:55.075 [2024-11-20 10:04:25.822160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.075 [2024-11-20 10:04:25.822253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.075 [2024-11-20 10:04:25.822266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.075 [2024-11-20 10:04:25.822277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.075 [2024-11-20 10:04:25.822284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.075 [2024-11-20 10:04:25.822299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.075 qpair failed and we were unable to recover it. 00:30:55.075 [2024-11-20 10:04:25.832031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.075 [2024-11-20 10:04:25.832109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.075 [2024-11-20 10:04:25.832123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.075 [2024-11-20 10:04:25.832130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.075 [2024-11-20 10:04:25.832137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.075 [2024-11-20 10:04:25.832153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.075 qpair failed and we were unable to recover it. 
00:30:55.075 [2024-11-20 10:04:25.842216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.075 [2024-11-20 10:04:25.842273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.075 [2024-11-20 10:04:25.842287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.075 [2024-11-20 10:04:25.842294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.075 [2024-11-20 10:04:25.842301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.075 [2024-11-20 10:04:25.842316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.075 qpair failed and we were unable to recover it. 00:30:55.075 [2024-11-20 10:04:25.852257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.075 [2024-11-20 10:04:25.852315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.075 [2024-11-20 10:04:25.852328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.075 [2024-11-20 10:04:25.852336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.075 [2024-11-20 10:04:25.852343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.075 [2024-11-20 10:04:25.852358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.075 qpair failed and we were unable to recover it. 00:30:55.075 [2024-11-20 10:04:25.862260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.075 [2024-11-20 10:04:25.862313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.075 [2024-11-20 10:04:25.862326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.075 [2024-11-20 10:04:25.862334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.075 [2024-11-20 10:04:25.862340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.075 [2024-11-20 10:04:25.862355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.075 qpair failed and we were unable to recover it. 
00:30:55.075 [2024-11-20 10:04:25.872242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.075 [2024-11-20 10:04:25.872288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.075 [2024-11-20 10:04:25.872301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.075 [2024-11-20 10:04:25.872309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.075 [2024-11-20 10:04:25.872316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.075 [2024-11-20 10:04:25.872330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.075 qpair failed and we were unable to recover it. 00:30:55.075 [2024-11-20 10:04:25.882282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.075 [2024-11-20 10:04:25.882340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.075 [2024-11-20 10:04:25.882353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.075 [2024-11-20 10:04:25.882361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.075 [2024-11-20 10:04:25.882368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.075 [2024-11-20 10:04:25.882383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.075 qpair failed and we were unable to recover it. 00:30:55.075 [2024-11-20 10:04:25.892356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.075 [2024-11-20 10:04:25.892410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.075 [2024-11-20 10:04:25.892424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.075 [2024-11-20 10:04:25.892431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.075 [2024-11-20 10:04:25.892438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.075 [2024-11-20 10:04:25.892453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.075 qpair failed and we were unable to recover it. 
00:30:55.075 [2024-11-20 10:04:25.902393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.075 [2024-11-20 10:04:25.902468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.076 [2024-11-20 10:04:25.902481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.076 [2024-11-20 10:04:25.902488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.076 [2024-11-20 10:04:25.902495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.076 [2024-11-20 10:04:25.902511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.076 qpair failed and we were unable to recover it. 00:30:55.076 [2024-11-20 10:04:25.912348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.076 [2024-11-20 10:04:25.912400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.076 [2024-11-20 10:04:25.912414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.076 [2024-11-20 10:04:25.912421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.076 [2024-11-20 10:04:25.912428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.076 [2024-11-20 10:04:25.912443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.076 qpair failed and we were unable to recover it. 00:30:55.076 [2024-11-20 10:04:25.922421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.076 [2024-11-20 10:04:25.922513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.076 [2024-11-20 10:04:25.922526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.076 [2024-11-20 10:04:25.922533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.076 [2024-11-20 10:04:25.922540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.076 [2024-11-20 10:04:25.922555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.076 qpair failed and we were unable to recover it. 
00:30:55.076 [2024-11-20 10:04:25.932423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.076 [2024-11-20 10:04:25.932495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.076 [2024-11-20 10:04:25.932509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.076 [2024-11-20 10:04:25.932516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.076 [2024-11-20 10:04:25.932523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.076 [2024-11-20 10:04:25.932537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.076 qpair failed and we were unable to recover it. 00:30:55.076 [2024-11-20 10:04:25.942483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.076 [2024-11-20 10:04:25.942539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.076 [2024-11-20 10:04:25.942552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.076 [2024-11-20 10:04:25.942559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.076 [2024-11-20 10:04:25.942566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.076 [2024-11-20 10:04:25.942580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.076 qpair failed and we were unable to recover it. 00:30:55.076 [2024-11-20 10:04:25.952350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.076 [2024-11-20 10:04:25.952397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.076 [2024-11-20 10:04:25.952411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.076 [2024-11-20 10:04:25.952422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.076 [2024-11-20 10:04:25.952429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.076 [2024-11-20 10:04:25.952454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.076 qpair failed and we were unable to recover it. 
00:30:55.076 [2024-11-20 10:04:25.962538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.076 [2024-11-20 10:04:25.962593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.076 [2024-11-20 10:04:25.962606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.076 [2024-11-20 10:04:25.962613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.076 [2024-11-20 10:04:25.962620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.076 [2024-11-20 10:04:25.962634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.076 qpair failed and we were unable to recover it. 00:30:55.076 [2024-11-20 10:04:25.972543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.076 [2024-11-20 10:04:25.972593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.076 [2024-11-20 10:04:25.972606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.076 [2024-11-20 10:04:25.972613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.076 [2024-11-20 10:04:25.972620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.076 [2024-11-20 10:04:25.972635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.076 qpair failed and we were unable to recover it. 00:30:55.076 [2024-11-20 10:04:25.982577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.076 [2024-11-20 10:04:25.982627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.076 [2024-11-20 10:04:25.982640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.076 [2024-11-20 10:04:25.982647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.076 [2024-11-20 10:04:25.982654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.076 [2024-11-20 10:04:25.982669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.076 qpair failed and we were unable to recover it. 
00:30:55.339 [2024-11-20 10:04:25.992552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.339 [2024-11-20 10:04:25.992603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.339 [2024-11-20 10:04:25.992616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.339 [2024-11-20 10:04:25.992624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.339 [2024-11-20 10:04:25.992631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.339 [2024-11-20 10:04:25.992653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.339 qpair failed and we were unable to recover it.
00:30:55.339 [2024-11-20 10:04:26.002655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.339 [2024-11-20 10:04:26.002709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.339 [2024-11-20 10:04:26.002722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.339 [2024-11-20 10:04:26.002730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.339 [2024-11-20 10:04:26.002736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.339 [2024-11-20 10:04:26.002751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.339 qpair failed and we were unable to recover it.
00:30:55.339 [2024-11-20 10:04:26.012658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.339 [2024-11-20 10:04:26.012706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.339 [2024-11-20 10:04:26.012719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.339 [2024-11-20 10:04:26.012727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.339 [2024-11-20 10:04:26.012733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.339 [2024-11-20 10:04:26.012748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.339 qpair failed and we were unable to recover it.
00:30:55.339 [2024-11-20 10:04:26.022698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.339 [2024-11-20 10:04:26.022780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.339 [2024-11-20 10:04:26.022793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.339 [2024-11-20 10:04:26.022801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.339 [2024-11-20 10:04:26.022809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.339 [2024-11-20 10:04:26.022823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.339 qpair failed and we were unable to recover it.
00:30:55.339 [2024-11-20 10:04:26.032707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.339 [2024-11-20 10:04:26.032774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.339 [2024-11-20 10:04:26.032787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.339 [2024-11-20 10:04:26.032794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.339 [2024-11-20 10:04:26.032801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.339 [2024-11-20 10:04:26.032815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.339 qpair failed and we were unable to recover it.
00:30:55.339 [2024-11-20 10:04:26.042777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.339 [2024-11-20 10:04:26.042860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.339 [2024-11-20 10:04:26.042873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.339 [2024-11-20 10:04:26.042881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.339 [2024-11-20 10:04:26.042888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.339 [2024-11-20 10:04:26.042903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.339 qpair failed and we were unable to recover it.
00:30:55.339 [2024-11-20 10:04:26.052774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.339 [2024-11-20 10:04:26.052822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.339 [2024-11-20 10:04:26.052835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.339 [2024-11-20 10:04:26.052842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.339 [2024-11-20 10:04:26.052849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.339 [2024-11-20 10:04:26.052864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.339 qpair failed and we were unable to recover it.
00:30:55.339 [2024-11-20 10:04:26.062812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.339 [2024-11-20 10:04:26.062860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.339 [2024-11-20 10:04:26.062873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.339 [2024-11-20 10:04:26.062881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.339 [2024-11-20 10:04:26.062888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.339 [2024-11-20 10:04:26.062906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.339 qpair failed and we were unable to recover it.
00:30:55.339 [2024-11-20 10:04:26.072808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.339 [2024-11-20 10:04:26.072860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.339 [2024-11-20 10:04:26.072873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.339 [2024-11-20 10:04:26.072881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.339 [2024-11-20 10:04:26.072887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.339 [2024-11-20 10:04:26.072902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.339 qpair failed and we were unable to recover it.
00:30:55.339 [2024-11-20 10:04:26.082750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.339 [2024-11-20 10:04:26.082806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.339 [2024-11-20 10:04:26.082823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.339 [2024-11-20 10:04:26.082831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.339 [2024-11-20 10:04:26.082838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.339 [2024-11-20 10:04:26.082859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.339 qpair failed and we were unable to recover it.
00:30:55.339 [2024-11-20 10:04:26.092837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.339 [2024-11-20 10:04:26.092891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.339 [2024-11-20 10:04:26.092904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.339 [2024-11-20 10:04:26.092912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.339 [2024-11-20 10:04:26.092918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.339 [2024-11-20 10:04:26.092933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.339 qpair failed and we were unable to recover it.
00:30:55.340 [2024-11-20 10:04:26.102891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.340 [2024-11-20 10:04:26.102958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.340 [2024-11-20 10:04:26.102971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.340 [2024-11-20 10:04:26.102978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.340 [2024-11-20 10:04:26.102985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.340 [2024-11-20 10:04:26.102999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.340 qpair failed and we were unable to recover it.
00:30:55.340 [2024-11-20 10:04:26.112880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.340 [2024-11-20 10:04:26.112929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.340 [2024-11-20 10:04:26.112942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.340 [2024-11-20 10:04:26.112950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.340 [2024-11-20 10:04:26.112956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.340 [2024-11-20 10:04:26.112971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.340 qpair failed and we were unable to recover it.
00:30:55.340 [2024-11-20 10:04:26.122977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.340 [2024-11-20 10:04:26.123031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.340 [2024-11-20 10:04:26.123044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.340 [2024-11-20 10:04:26.123051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.340 [2024-11-20 10:04:26.123061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.340 [2024-11-20 10:04:26.123076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.340 qpair failed and we were unable to recover it.
00:30:55.340 [2024-11-20 10:04:26.132974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.340 [2024-11-20 10:04:26.133023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.340 [2024-11-20 10:04:26.133036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.340 [2024-11-20 10:04:26.133043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.340 [2024-11-20 10:04:26.133050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.340 [2024-11-20 10:04:26.133065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.340 qpair failed and we were unable to recover it.
00:30:55.340 [2024-11-20 10:04:26.142899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.340 [2024-11-20 10:04:26.142958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.340 [2024-11-20 10:04:26.142971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.340 [2024-11-20 10:04:26.142978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.340 [2024-11-20 10:04:26.142985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.340 [2024-11-20 10:04:26.142999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.340 qpair failed and we were unable to recover it.
00:30:55.340 [2024-11-20 10:04:26.153019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.340 [2024-11-20 10:04:26.153074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.340 [2024-11-20 10:04:26.153087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.340 [2024-11-20 10:04:26.153094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.340 [2024-11-20 10:04:26.153101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.340 [2024-11-20 10:04:26.153116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.340 qpair failed and we were unable to recover it.
00:30:55.340 [2024-11-20 10:04:26.163078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.340 [2024-11-20 10:04:26.163130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.340 [2024-11-20 10:04:26.163144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.340 [2024-11-20 10:04:26.163151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.340 [2024-11-20 10:04:26.163162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.340 [2024-11-20 10:04:26.163177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.340 qpair failed and we were unable to recover it.
00:30:55.340 [2024-11-20 10:04:26.173092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.340 [2024-11-20 10:04:26.173142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.340 [2024-11-20 10:04:26.173155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.340 [2024-11-20 10:04:26.173167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.340 [2024-11-20 10:04:26.173175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.340 [2024-11-20 10:04:26.173189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.340 qpair failed and we were unable to recover it.
00:30:55.340 [2024-11-20 10:04:26.183119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.340 [2024-11-20 10:04:26.183177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.340 [2024-11-20 10:04:26.183190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.340 [2024-11-20 10:04:26.183198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.340 [2024-11-20 10:04:26.183204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.340 [2024-11-20 10:04:26.183219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.340 qpair failed and we were unable to recover it.
00:30:55.340 [2024-11-20 10:04:26.193135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.340 [2024-11-20 10:04:26.193189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.340 [2024-11-20 10:04:26.193203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.340 [2024-11-20 10:04:26.193211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.340 [2024-11-20 10:04:26.193218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.340 [2024-11-20 10:04:26.193233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.340 qpair failed and we were unable to recover it.
00:30:55.340 [2024-11-20 10:04:26.203202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.340 [2024-11-20 10:04:26.203253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.340 [2024-11-20 10:04:26.203266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.340 [2024-11-20 10:04:26.203274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.340 [2024-11-20 10:04:26.203280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.340 [2024-11-20 10:04:26.203295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.340 qpair failed and we were unable to recover it.
00:30:55.340 [2024-11-20 10:04:26.213155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.340 [2024-11-20 10:04:26.213256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.340 [2024-11-20 10:04:26.213273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.340 [2024-11-20 10:04:26.213280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.340 [2024-11-20 10:04:26.213287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.340 [2024-11-20 10:04:26.213301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.340 qpair failed and we were unable to recover it.
00:30:55.341 [2024-11-20 10:04:26.223235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.341 [2024-11-20 10:04:26.223285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.341 [2024-11-20 10:04:26.223298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.341 [2024-11-20 10:04:26.223305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.341 [2024-11-20 10:04:26.223312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.341 [2024-11-20 10:04:26.223326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.341 qpair failed and we were unable to recover it.
00:30:55.341 [2024-11-20 10:04:26.233235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.341 [2024-11-20 10:04:26.233287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.341 [2024-11-20 10:04:26.233300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.341 [2024-11-20 10:04:26.233308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.341 [2024-11-20 10:04:26.233315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.341 [2024-11-20 10:04:26.233329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.341 qpair failed and we were unable to recover it.
00:30:55.341 [2024-11-20 10:04:26.243237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.341 [2024-11-20 10:04:26.243340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.341 [2024-11-20 10:04:26.243355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.341 [2024-11-20 10:04:26.243362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.341 [2024-11-20 10:04:26.243371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.341 [2024-11-20 10:04:26.243390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.341 qpair failed and we were unable to recover it.
00:30:55.603 [2024-11-20 10:04:26.253300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.603 [2024-11-20 10:04:26.253352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.603 [2024-11-20 10:04:26.253366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.603 [2024-11-20 10:04:26.253374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.603 [2024-11-20 10:04:26.253385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.603 [2024-11-20 10:04:26.253400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.603 qpair failed and we were unable to recover it.
00:30:55.603 [2024-11-20 10:04:26.263351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.603 [2024-11-20 10:04:26.263406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.603 [2024-11-20 10:04:26.263419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.603 [2024-11-20 10:04:26.263426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.603 [2024-11-20 10:04:26.263433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.603 [2024-11-20 10:04:26.263448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.603 qpair failed and we were unable to recover it.
00:30:55.603 [2024-11-20 10:04:26.273346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.603 [2024-11-20 10:04:26.273392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.603 [2024-11-20 10:04:26.273406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.603 [2024-11-20 10:04:26.273414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.603 [2024-11-20 10:04:26.273420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.603 [2024-11-20 10:04:26.273435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.603 qpair failed and we were unable to recover it.
00:30:55.603 [2024-11-20 10:04:26.283417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.603 [2024-11-20 10:04:26.283474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.604 [2024-11-20 10:04:26.283488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.604 [2024-11-20 10:04:26.283495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.604 [2024-11-20 10:04:26.283502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.604 [2024-11-20 10:04:26.283517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.604 qpair failed and we were unable to recover it.
00:30:55.604 [2024-11-20 10:04:26.293429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.604 [2024-11-20 10:04:26.293480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.604 [2024-11-20 10:04:26.293494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.604 [2024-11-20 10:04:26.293501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.604 [2024-11-20 10:04:26.293508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.604 [2024-11-20 10:04:26.293523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.604 qpair failed and we were unable to recover it.
00:30:55.604 [2024-11-20 10:04:26.303325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.604 [2024-11-20 10:04:26.303374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.604 [2024-11-20 10:04:26.303387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.604 [2024-11-20 10:04:26.303395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.604 [2024-11-20 10:04:26.303402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.604 [2024-11-20 10:04:26.303416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.604 qpair failed and we were unable to recover it.
00:30:55.604 [2024-11-20 10:04:26.313463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.604 [2024-11-20 10:04:26.313512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.604 [2024-11-20 10:04:26.313526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.604 [2024-11-20 10:04:26.313533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.604 [2024-11-20 10:04:26.313540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.604 [2024-11-20 10:04:26.313554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.604 qpair failed and we were unable to recover it.
00:30:55.604 [2024-11-20 10:04:26.323530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.604 [2024-11-20 10:04:26.323588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.604 [2024-11-20 10:04:26.323601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.604 [2024-11-20 10:04:26.323608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.604 [2024-11-20 10:04:26.323615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.604 [2024-11-20 10:04:26.323629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.604 qpair failed and we were unable to recover it.
00:30:55.604 [2024-11-20 10:04:26.333501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.604 [2024-11-20 10:04:26.333549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.604 [2024-11-20 10:04:26.333562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.604 [2024-11-20 10:04:26.333569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.604 [2024-11-20 10:04:26.333576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.604 [2024-11-20 10:04:26.333590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.604 qpair failed and we were unable to recover it.
00:30:55.604 [2024-11-20 10:04:26.343570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.604 [2024-11-20 10:04:26.343625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.604 [2024-11-20 10:04:26.343642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.604 [2024-11-20 10:04:26.343650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.604 [2024-11-20 10:04:26.343656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.604 [2024-11-20 10:04:26.343671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.604 qpair failed and we were unable to recover it.
00:30:55.604 [2024-11-20 10:04:26.353559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.604 [2024-11-20 10:04:26.353607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.604 [2024-11-20 10:04:26.353619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.604 [2024-11-20 10:04:26.353627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.604 [2024-11-20 10:04:26.353634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.604 [2024-11-20 10:04:26.353648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.604 qpair failed and we were unable to recover it.
00:30:55.604 [2024-11-20 10:04:26.363627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.604 [2024-11-20 10:04:26.363681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.604 [2024-11-20 10:04:26.363694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.604 [2024-11-20 10:04:26.363701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.604 [2024-11-20 10:04:26.363707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.604 [2024-11-20 10:04:26.363722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.604 qpair failed and we were unable to recover it.
00:30:55.604 [2024-11-20 10:04:26.373658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.604 [2024-11-20 10:04:26.373715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.604 [2024-11-20 10:04:26.373727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.604 [2024-11-20 10:04:26.373735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.604 [2024-11-20 10:04:26.373741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.604 [2024-11-20 10:04:26.373755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.604 qpair failed and we were unable to recover it.
00:30:55.604 [2024-11-20 10:04:26.383694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.604 [2024-11-20 10:04:26.383751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.604 [2024-11-20 10:04:26.383764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.604 [2024-11-20 10:04:26.383775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.604 [2024-11-20 10:04:26.383782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.604 [2024-11-20 10:04:26.383796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.604 qpair failed and we were unable to recover it.
00:30:55.604 [2024-11-20 10:04:26.393717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.604 [2024-11-20 10:04:26.393775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.604 [2024-11-20 10:04:26.393788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.604 [2024-11-20 10:04:26.393795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.604 [2024-11-20 10:04:26.393802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.604 [2024-11-20 10:04:26.393816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.604 qpair failed and we were unable to recover it.
00:30:55.604 [2024-11-20 10:04:26.403760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.604 [2024-11-20 10:04:26.403816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.604 [2024-11-20 10:04:26.403828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.604 [2024-11-20 10:04:26.403836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.604 [2024-11-20 10:04:26.403842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.604 [2024-11-20 10:04:26.403856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.604 qpair failed and we were unable to recover it.
00:30:55.604 [2024-11-20 10:04:26.413755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.604 [2024-11-20 10:04:26.413810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.605 [2024-11-20 10:04:26.413823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.605 [2024-11-20 10:04:26.413830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.605 [2024-11-20 10:04:26.413837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.605 [2024-11-20 10:04:26.413851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.605 qpair failed and we were unable to recover it.
00:30:55.605 [2024-11-20 10:04:26.423824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.605 [2024-11-20 10:04:26.423883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.605 [2024-11-20 10:04:26.423899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.605 [2024-11-20 10:04:26.423907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.605 [2024-11-20 10:04:26.423914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.605 [2024-11-20 10:04:26.423930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.605 qpair failed and we were unable to recover it.
00:30:55.605 [2024-11-20 10:04:26.433779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.605 [2024-11-20 10:04:26.433823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.605 [2024-11-20 10:04:26.433836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.605 [2024-11-20 10:04:26.433844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.605 [2024-11-20 10:04:26.433851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.605 [2024-11-20 10:04:26.433865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.605 qpair failed and we were unable to recover it.
00:30:55.605 [2024-11-20 10:04:26.443856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.605 [2024-11-20 10:04:26.443909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.605 [2024-11-20 10:04:26.443922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.605 [2024-11-20 10:04:26.443929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.605 [2024-11-20 10:04:26.443936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.605 [2024-11-20 10:04:26.443950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.605 qpair failed and we were unable to recover it.
00:30:55.605 [2024-11-20 10:04:26.453849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.605 [2024-11-20 10:04:26.453898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.605 [2024-11-20 10:04:26.453911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.605 [2024-11-20 10:04:26.453918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.605 [2024-11-20 10:04:26.453925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.605 [2024-11-20 10:04:26.453940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.605 qpair failed and we were unable to recover it.
00:30:55.605 [2024-11-20 10:04:26.463872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.605 [2024-11-20 10:04:26.463919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.605 [2024-11-20 10:04:26.463932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.605 [2024-11-20 10:04:26.463940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.605 [2024-11-20 10:04:26.463946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.605 [2024-11-20 10:04:26.463962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.605 qpair failed and we were unable to recover it.
00:30:55.605 [2024-11-20 10:04:26.473898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.605 [2024-11-20 10:04:26.473950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.605 [2024-11-20 10:04:26.473964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.605 [2024-11-20 10:04:26.473971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.605 [2024-11-20 10:04:26.473978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.605 [2024-11-20 10:04:26.473992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.605 qpair failed and we were unable to recover it.
00:30:55.605 [2024-11-20 10:04:26.483941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.605 [2024-11-20 10:04:26.483996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.605 [2024-11-20 10:04:26.484009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.605 [2024-11-20 10:04:26.484016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.605 [2024-11-20 10:04:26.484023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.605 [2024-11-20 10:04:26.484037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.605 qpair failed and we were unable to recover it.
00:30:55.605 [2024-11-20 10:04:26.493958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.605 [2024-11-20 10:04:26.494015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.605 [2024-11-20 10:04:26.494029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.605 [2024-11-20 10:04:26.494037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.605 [2024-11-20 10:04:26.494043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.605 [2024-11-20 10:04:26.494058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.605 qpair failed and we were unable to recover it.
00:30:55.605 [2024-11-20 10:04:26.503983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.605 [2024-11-20 10:04:26.504064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.605 [2024-11-20 10:04:26.504078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.605 [2024-11-20 10:04:26.504085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.605 [2024-11-20 10:04:26.504093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.605 [2024-11-20 10:04:26.504107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.605 qpair failed and we were unable to recover it.
00:30:55.605 [2024-11-20 10:04:26.513972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.605 [2024-11-20 10:04:26.514015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.605 [2024-11-20 10:04:26.514028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.605 [2024-11-20 10:04:26.514039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.605 [2024-11-20 10:04:26.514046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.605 [2024-11-20 10:04:26.514061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.605 qpair failed and we were unable to recover it.
00:30:55.868 [2024-11-20 10:04:26.524091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.868 [2024-11-20 10:04:26.524153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.868 [2024-11-20 10:04:26.524170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.868 [2024-11-20 10:04:26.524177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.868 [2024-11-20 10:04:26.524185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.868 [2024-11-20 10:04:26.524199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.868 qpair failed and we were unable to recover it.
00:30:55.868 [2024-11-20 10:04:26.534068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.868 [2024-11-20 10:04:26.534116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.868 [2024-11-20 10:04:26.534129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.868 [2024-11-20 10:04:26.534137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.868 [2024-11-20 10:04:26.534143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.868 [2024-11-20 10:04:26.534167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.868 qpair failed and we were unable to recover it.
00:30:55.868 [2024-11-20 10:04:26.544081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.868 [2024-11-20 10:04:26.544129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.868 [2024-11-20 10:04:26.544142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.868 [2024-11-20 10:04:26.544150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.868 [2024-11-20 10:04:26.544156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.868 [2024-11-20 10:04:26.544175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.868 qpair failed and we were unable to recover it.
00:30:55.868 [2024-11-20 10:04:26.554110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.868 [2024-11-20 10:04:26.554152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.868 [2024-11-20 10:04:26.554169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.868 [2024-11-20 10:04:26.554177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.868 [2024-11-20 10:04:26.554184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.868 [2024-11-20 10:04:26.554202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.868 qpair failed and we were unable to recover it.
00:30:55.868 [2024-11-20 10:04:26.564055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.868 [2024-11-20 10:04:26.564116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.868 [2024-11-20 10:04:26.564129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.868 [2024-11-20 10:04:26.564136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.868 [2024-11-20 10:04:26.564143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.868 [2024-11-20 10:04:26.564157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.868 qpair failed and we were unable to recover it.
00:30:55.868 [2024-11-20 10:04:26.574173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.868 [2024-11-20 10:04:26.574223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.868 [2024-11-20 10:04:26.574236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.868 [2024-11-20 10:04:26.574243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.868 [2024-11-20 10:04:26.574250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.868 [2024-11-20 10:04:26.574264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.868 qpair failed and we were unable to recover it.
00:30:55.868 [2024-11-20 10:04:26.584190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:55.868 [2024-11-20 10:04:26.584238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:55.868 [2024-11-20 10:04:26.584250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:55.868 [2024-11-20 10:04:26.584258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:55.868 [2024-11-20 10:04:26.584264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90
00:30:55.868 [2024-11-20 10:04:26.584279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:55.868 qpair failed and we were unable to recover it.
00:30:55.869 [2024-11-20 10:04:26.594099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.869 [2024-11-20 10:04:26.594145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.869 [2024-11-20 10:04:26.594165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.869 [2024-11-20 10:04:26.594173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.869 [2024-11-20 10:04:26.594180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.869 [2024-11-20 10:04:26.594195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.869 qpair failed and we were unable to recover it. 00:30:55.869 [2024-11-20 10:04:26.604266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.869 [2024-11-20 10:04:26.604323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.869 [2024-11-20 10:04:26.604337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.869 [2024-11-20 10:04:26.604344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.869 [2024-11-20 10:04:26.604351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.869 [2024-11-20 10:04:26.604365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.869 qpair failed and we were unable to recover it. 00:30:55.869 [2024-11-20 10:04:26.614306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.869 [2024-11-20 10:04:26.614362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.869 [2024-11-20 10:04:26.614376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.869 [2024-11-20 10:04:26.614384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.869 [2024-11-20 10:04:26.614391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.869 [2024-11-20 10:04:26.614405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.869 qpair failed and we were unable to recover it. 
00:30:55.869 [2024-11-20 10:04:26.624270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.869 [2024-11-20 10:04:26.624316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.869 [2024-11-20 10:04:26.624330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.869 [2024-11-20 10:04:26.624337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.869 [2024-11-20 10:04:26.624344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.869 [2024-11-20 10:04:26.624359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.869 qpair failed and we were unable to recover it. 00:30:55.869 [2024-11-20 10:04:26.634322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.869 [2024-11-20 10:04:26.634402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.869 [2024-11-20 10:04:26.634416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.869 [2024-11-20 10:04:26.634423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.869 [2024-11-20 10:04:26.634430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.869 [2024-11-20 10:04:26.634445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.869 qpair failed and we were unable to recover it. 00:30:55.869 [2024-11-20 10:04:26.644395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.869 [2024-11-20 10:04:26.644447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.869 [2024-11-20 10:04:26.644464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.869 [2024-11-20 10:04:26.644471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.869 [2024-11-20 10:04:26.644478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.869 [2024-11-20 10:04:26.644493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.869 qpair failed and we were unable to recover it. 
00:30:55.869 [2024-11-20 10:04:26.654406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.869 [2024-11-20 10:04:26.654458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.869 [2024-11-20 10:04:26.654471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.869 [2024-11-20 10:04:26.654478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.869 [2024-11-20 10:04:26.654485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.869 [2024-11-20 10:04:26.654499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.869 qpair failed and we were unable to recover it. 00:30:55.869 [2024-11-20 10:04:26.664407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.869 [2024-11-20 10:04:26.664457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.869 [2024-11-20 10:04:26.664470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.869 [2024-11-20 10:04:26.664478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.869 [2024-11-20 10:04:26.664484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.869 [2024-11-20 10:04:26.664499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.869 qpair failed and we were unable to recover it. 00:30:55.869 [2024-11-20 10:04:26.674450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.869 [2024-11-20 10:04:26.674498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.869 [2024-11-20 10:04:26.674512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.869 [2024-11-20 10:04:26.674519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.869 [2024-11-20 10:04:26.674526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.869 [2024-11-20 10:04:26.674540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.869 qpair failed and we were unable to recover it. 
00:30:55.869 [2024-11-20 10:04:26.684491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.869 [2024-11-20 10:04:26.684546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.869 [2024-11-20 10:04:26.684559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.869 [2024-11-20 10:04:26.684566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.869 [2024-11-20 10:04:26.684579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.869 [2024-11-20 10:04:26.684594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.869 qpair failed and we were unable to recover it. 00:30:55.869 [2024-11-20 10:04:26.694390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.869 [2024-11-20 10:04:26.694442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.869 [2024-11-20 10:04:26.694455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.869 [2024-11-20 10:04:26.694463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.869 [2024-11-20 10:04:26.694470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.869 [2024-11-20 10:04:26.694484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.869 qpair failed and we were unable to recover it. 00:30:55.869 [2024-11-20 10:04:26.704525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.869 [2024-11-20 10:04:26.704573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.869 [2024-11-20 10:04:26.704586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.869 [2024-11-20 10:04:26.704594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.869 [2024-11-20 10:04:26.704600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.869 [2024-11-20 10:04:26.704615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.869 qpair failed and we were unable to recover it. 
00:30:55.869 [2024-11-20 10:04:26.714599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.869 [2024-11-20 10:04:26.714658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.869 [2024-11-20 10:04:26.714671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.869 [2024-11-20 10:04:26.714678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.869 [2024-11-20 10:04:26.714685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.869 [2024-11-20 10:04:26.714699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.869 qpair failed and we were unable to recover it. 00:30:55.869 [2024-11-20 10:04:26.724619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.870 [2024-11-20 10:04:26.724673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.870 [2024-11-20 10:04:26.724686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.870 [2024-11-20 10:04:26.724693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.870 [2024-11-20 10:04:26.724700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.870 [2024-11-20 10:04:26.724715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.870 qpair failed and we were unable to recover it. 00:30:55.870 [2024-11-20 10:04:26.734609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.870 [2024-11-20 10:04:26.734659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.870 [2024-11-20 10:04:26.734673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.870 [2024-11-20 10:04:26.734680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.870 [2024-11-20 10:04:26.734687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.870 [2024-11-20 10:04:26.734701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.870 qpair failed and we were unable to recover it. 
00:30:55.870 [2024-11-20 10:04:26.744619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.870 [2024-11-20 10:04:26.744668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.870 [2024-11-20 10:04:26.744681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.870 [2024-11-20 10:04:26.744688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.870 [2024-11-20 10:04:26.744695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.870 [2024-11-20 10:04:26.744709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.870 qpair failed and we were unable to recover it. 00:30:55.870 [2024-11-20 10:04:26.754607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.870 [2024-11-20 10:04:26.754702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.870 [2024-11-20 10:04:26.754716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.870 [2024-11-20 10:04:26.754724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.870 [2024-11-20 10:04:26.754730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.870 [2024-11-20 10:04:26.754745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.870 qpair failed and we were unable to recover it. 00:30:55.870 [2024-11-20 10:04:26.764714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.870 [2024-11-20 10:04:26.764769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.870 [2024-11-20 10:04:26.764782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.870 [2024-11-20 10:04:26.764790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.870 [2024-11-20 10:04:26.764796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.870 [2024-11-20 10:04:26.764811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.870 qpair failed and we were unable to recover it. 
00:30:55.870 [2024-11-20 10:04:26.774729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.870 [2024-11-20 10:04:26.774776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.870 [2024-11-20 10:04:26.774792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.870 [2024-11-20 10:04:26.774800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.870 [2024-11-20 10:04:26.774807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:55.870 [2024-11-20 10:04:26.774822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.870 qpair failed and we were unable to recover it. 00:30:56.132 [2024-11-20 10:04:26.784716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.132 [2024-11-20 10:04:26.784767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.132 [2024-11-20 10:04:26.784780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.132 [2024-11-20 10:04:26.784788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.132 [2024-11-20 10:04:26.784795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.132 [2024-11-20 10:04:26.784809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.132 qpair failed and we were unable to recover it. 00:30:56.132 [2024-11-20 10:04:26.794754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.132 [2024-11-20 10:04:26.794799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.132 [2024-11-20 10:04:26.794813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.132 [2024-11-20 10:04:26.794820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.132 [2024-11-20 10:04:26.794826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.132 [2024-11-20 10:04:26.794841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.132 qpair failed and we were unable to recover it. 
00:30:56.132 [2024-11-20 10:04:26.804791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.132 [2024-11-20 10:04:26.804845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.132 [2024-11-20 10:04:26.804858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.132 [2024-11-20 10:04:26.804865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.132 [2024-11-20 10:04:26.804871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.132 [2024-11-20 10:04:26.804886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.132 qpair failed and we were unable to recover it. 00:30:56.132 [2024-11-20 10:04:26.814826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.132 [2024-11-20 10:04:26.814883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.132 [2024-11-20 10:04:26.814896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.132 [2024-11-20 10:04:26.814903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.132 [2024-11-20 10:04:26.814913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.132 [2024-11-20 10:04:26.814927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.132 qpair failed and we were unable to recover it. 00:30:56.132 [2024-11-20 10:04:26.824826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.132 [2024-11-20 10:04:26.824879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.132 [2024-11-20 10:04:26.824903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.132 [2024-11-20 10:04:26.824912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.132 [2024-11-20 10:04:26.824919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.132 [2024-11-20 10:04:26.824939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.132 qpair failed and we were unable to recover it. 
00:30:56.132 [2024-11-20 10:04:26.834872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.132 [2024-11-20 10:04:26.834965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.132 [2024-11-20 10:04:26.834991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.132 [2024-11-20 10:04:26.835000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.132 [2024-11-20 10:04:26.835007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.132 [2024-11-20 10:04:26.835028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.132 qpair failed and we were unable to recover it. 00:30:56.132 [2024-11-20 10:04:26.844941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.132 [2024-11-20 10:04:26.844998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.133 [2024-11-20 10:04:26.845013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.133 [2024-11-20 10:04:26.845021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.133 [2024-11-20 10:04:26.845028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.133 [2024-11-20 10:04:26.845044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.133 qpair failed and we were unable to recover it. 00:30:56.133 [2024-11-20 10:04:26.854931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.133 [2024-11-20 10:04:26.854983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.133 [2024-11-20 10:04:26.854996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.133 [2024-11-20 10:04:26.855004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.133 [2024-11-20 10:04:26.855010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.133 [2024-11-20 10:04:26.855025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.133 qpair failed and we were unable to recover it. 
00:30:56.133 [2024-11-20 10:04:26.864912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.133 [2024-11-20 10:04:26.864954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.133 [2024-11-20 10:04:26.864968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.133 [2024-11-20 10:04:26.864975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.133 [2024-11-20 10:04:26.864981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.133 [2024-11-20 10:04:26.864996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.133 qpair failed and we were unable to recover it. 00:30:56.133 [2024-11-20 10:04:26.874966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.133 [2024-11-20 10:04:26.875061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.133 [2024-11-20 10:04:26.875075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.133 [2024-11-20 10:04:26.875083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.133 [2024-11-20 10:04:26.875089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.133 [2024-11-20 10:04:26.875104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.133 qpair failed and we were unable to recover it. 00:30:56.133 [2024-11-20 10:04:26.884909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.133 [2024-11-20 10:04:26.885004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.133 [2024-11-20 10:04:26.885017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.133 [2024-11-20 10:04:26.885024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.133 [2024-11-20 10:04:26.885031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.133 [2024-11-20 10:04:26.885046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.133 qpair failed and we were unable to recover it. 
00:30:56.133 [2024-11-20 10:04:26.895050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.133 [2024-11-20 10:04:26.895133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.133 [2024-11-20 10:04:26.895147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.133 [2024-11-20 10:04:26.895154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.133 [2024-11-20 10:04:26.895165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.133 [2024-11-20 10:04:26.895181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.133 qpair failed and we were unable to recover it. 00:30:56.133 [2024-11-20 10:04:26.905056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.133 [2024-11-20 10:04:26.905123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.133 [2024-11-20 10:04:26.905139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.133 [2024-11-20 10:04:26.905147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.133 [2024-11-20 10:04:26.905153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.133 [2024-11-20 10:04:26.905171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.133 qpair failed and we were unable to recover it. 00:30:56.133 [2024-11-20 10:04:26.914959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.133 [2024-11-20 10:04:26.915012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.133 [2024-11-20 10:04:26.915025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.133 [2024-11-20 10:04:26.915032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.133 [2024-11-20 10:04:26.915039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.133 [2024-11-20 10:04:26.915053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.133 qpair failed and we were unable to recover it. 
00:30:56.133 [2024-11-20 10:04:26.925166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.133 [2024-11-20 10:04:26.925221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.133 [2024-11-20 10:04:26.925234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.133 [2024-11-20 10:04:26.925241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.133 [2024-11-20 10:04:26.925248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.133 [2024-11-20 10:04:26.925264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.133 qpair failed and we were unable to recover it. 00:30:56.133 [2024-11-20 10:04:26.935164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.133 [2024-11-20 10:04:26.935216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.133 [2024-11-20 10:04:26.935231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.133 [2024-11-20 10:04:26.935238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.133 [2024-11-20 10:04:26.935245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.133 [2024-11-20 10:04:26.935260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.133 qpair failed and we were unable to recover it. 00:30:56.133 [2024-11-20 10:04:26.945171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.133 [2024-11-20 10:04:26.945223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.133 [2024-11-20 10:04:26.945236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.133 [2024-11-20 10:04:26.945247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.133 [2024-11-20 10:04:26.945253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.133 [2024-11-20 10:04:26.945268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.133 qpair failed and we were unable to recover it. 
00:30:56.133 [2024-11-20 10:04:26.955191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.133 [2024-11-20 10:04:26.955238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.133 [2024-11-20 10:04:26.955251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.133 [2024-11-20 10:04:26.955258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.133 [2024-11-20 10:04:26.955265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.133 [2024-11-20 10:04:26.955280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.133 qpair failed and we were unable to recover it. 00:30:56.133 [2024-11-20 10:04:26.965325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.133 [2024-11-20 10:04:26.965380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.133 [2024-11-20 10:04:26.965393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.133 [2024-11-20 10:04:26.965401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.133 [2024-11-20 10:04:26.965407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.133 [2024-11-20 10:04:26.965423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.133 qpair failed and we were unable to recover it. 00:30:56.133 [2024-11-20 10:04:26.975254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.133 [2024-11-20 10:04:26.975308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.134 [2024-11-20 10:04:26.975322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.134 [2024-11-20 10:04:26.975329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.134 [2024-11-20 10:04:26.975336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.134 [2024-11-20 10:04:26.975351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.134 qpair failed and we were unable to recover it. 
00:30:56.134 [2024-11-20 10:04:26.985311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.134 [2024-11-20 10:04:26.985362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.134 [2024-11-20 10:04:26.985375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.134 [2024-11-20 10:04:26.985382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.134 [2024-11-20 10:04:26.985389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.134 [2024-11-20 10:04:26.985408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.134 qpair failed and we were unable to recover it. 00:30:56.134 [2024-11-20 10:04:26.995177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.134 [2024-11-20 10:04:26.995227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.134 [2024-11-20 10:04:26.995240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.134 [2024-11-20 10:04:26.995247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.134 [2024-11-20 10:04:26.995255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.134 [2024-11-20 10:04:26.995269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.134 qpair failed and we were unable to recover it. 00:30:56.134 [2024-11-20 10:04:27.005415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.134 [2024-11-20 10:04:27.005488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.134 [2024-11-20 10:04:27.005502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.134 [2024-11-20 10:04:27.005509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.134 [2024-11-20 10:04:27.005516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.134 [2024-11-20 10:04:27.005531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.134 qpair failed and we were unable to recover it. 
00:30:56.134 [2024-11-20 10:04:27.015325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.134 [2024-11-20 10:04:27.015408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.134 [2024-11-20 10:04:27.015421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.134 [2024-11-20 10:04:27.015428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.134 [2024-11-20 10:04:27.015435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.134 [2024-11-20 10:04:27.015450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.134 qpair failed and we were unable to recover it. 00:30:56.134 [2024-11-20 10:04:27.025373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.134 [2024-11-20 10:04:27.025418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.134 [2024-11-20 10:04:27.025431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.134 [2024-11-20 10:04:27.025438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.134 [2024-11-20 10:04:27.025445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.134 [2024-11-20 10:04:27.025459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.134 qpair failed and we were unable to recover it. 00:30:56.134 [2024-11-20 10:04:27.035442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.134 [2024-11-20 10:04:27.035525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.134 [2024-11-20 10:04:27.035538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.134 [2024-11-20 10:04:27.035545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.134 [2024-11-20 10:04:27.035552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.134 [2024-11-20 10:04:27.035567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.134 qpair failed and we were unable to recover it. 
00:30:56.397 [2024-11-20 10:04:27.045488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.397 [2024-11-20 10:04:27.045545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.397 [2024-11-20 10:04:27.045558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.397 [2024-11-20 10:04:27.045565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.397 [2024-11-20 10:04:27.045572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.397 [2024-11-20 10:04:27.045587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.397 qpair failed and we were unable to recover it. 00:30:56.397 [2024-11-20 10:04:27.055487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.397 [2024-11-20 10:04:27.055575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.397 [2024-11-20 10:04:27.055588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.397 [2024-11-20 10:04:27.055597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.397 [2024-11-20 10:04:27.055603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.397 [2024-11-20 10:04:27.055618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.397 qpair failed and we were unable to recover it. 00:30:56.397 [2024-11-20 10:04:27.065482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.397 [2024-11-20 10:04:27.065579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.397 [2024-11-20 10:04:27.065592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.397 [2024-11-20 10:04:27.065600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.397 [2024-11-20 10:04:27.065607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.397 [2024-11-20 10:04:27.065621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.397 qpair failed and we were unable to recover it. 
00:30:56.397 [2024-11-20 10:04:27.075509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.397 [2024-11-20 10:04:27.075558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.397 [2024-11-20 10:04:27.075571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.397 [2024-11-20 10:04:27.075582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.397 [2024-11-20 10:04:27.075589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.397 [2024-11-20 10:04:27.075604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.397 qpair failed and we were unable to recover it. 00:30:56.397 [2024-11-20 10:04:27.085594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.397 [2024-11-20 10:04:27.085650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.397 [2024-11-20 10:04:27.085665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.397 [2024-11-20 10:04:27.085672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.397 [2024-11-20 10:04:27.085679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.397 [2024-11-20 10:04:27.085698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.397 qpair failed and we were unable to recover it. 00:30:56.397 [2024-11-20 10:04:27.095464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.397 [2024-11-20 10:04:27.095513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.397 [2024-11-20 10:04:27.095529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.397 [2024-11-20 10:04:27.095536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.397 [2024-11-20 10:04:27.095543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.397 [2024-11-20 10:04:27.095558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.397 qpair failed and we were unable to recover it. 
00:30:56.397 [2024-11-20 10:04:27.105608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.397 [2024-11-20 10:04:27.105651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.397 [2024-11-20 10:04:27.105664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.397 [2024-11-20 10:04:27.105671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.397 [2024-11-20 10:04:27.105678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.397 [2024-11-20 10:04:27.105693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.397 qpair failed and we were unable to recover it. 00:30:56.397 [2024-11-20 10:04:27.115626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.397 [2024-11-20 10:04:27.115680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.397 [2024-11-20 10:04:27.115693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.397 [2024-11-20 10:04:27.115701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.397 [2024-11-20 10:04:27.115707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.397 [2024-11-20 10:04:27.115726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.397 qpair failed and we were unable to recover it. 00:30:56.397 [2024-11-20 10:04:27.125693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.397 [2024-11-20 10:04:27.125747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.397 [2024-11-20 10:04:27.125760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.397 [2024-11-20 10:04:27.125767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.397 [2024-11-20 10:04:27.125774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.397 [2024-11-20 10:04:27.125788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.397 qpair failed and we were unable to recover it. 
00:30:56.397 [2024-11-20 10:04:27.135663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.397 [2024-11-20 10:04:27.135717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.397 [2024-11-20 10:04:27.135730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.397 [2024-11-20 10:04:27.135738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.397 [2024-11-20 10:04:27.135745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.397 [2024-11-20 10:04:27.135759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.397 qpair failed and we were unable to recover it. 00:30:56.397 [2024-11-20 10:04:27.145713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.397 [2024-11-20 10:04:27.145762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.397 [2024-11-20 10:04:27.145775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.397 [2024-11-20 10:04:27.145782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.397 [2024-11-20 10:04:27.145789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.397 [2024-11-20 10:04:27.145804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.397 qpair failed and we were unable to recover it. 00:30:56.397 [2024-11-20 10:04:27.155729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.397 [2024-11-20 10:04:27.155786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.397 [2024-11-20 10:04:27.155800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.397 [2024-11-20 10:04:27.155807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.397 [2024-11-20 10:04:27.155814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.397 [2024-11-20 10:04:27.155828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.397 qpair failed and we were unable to recover it. 
00:30:56.397 [2024-11-20 10:04:27.165678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.397 [2024-11-20 10:04:27.165743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.398 [2024-11-20 10:04:27.165756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.398 [2024-11-20 10:04:27.165764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.398 [2024-11-20 10:04:27.165770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.398 [2024-11-20 10:04:27.165785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.398 qpair failed and we were unable to recover it. 00:30:56.398 [2024-11-20 10:04:27.175853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.398 [2024-11-20 10:04:27.175904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.398 [2024-11-20 10:04:27.175917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.398 [2024-11-20 10:04:27.175924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.398 [2024-11-20 10:04:27.175931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.398 [2024-11-20 10:04:27.175946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.398 qpair failed and we were unable to recover it. 00:30:56.398 [2024-11-20 10:04:27.185823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.398 [2024-11-20 10:04:27.185866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.398 [2024-11-20 10:04:27.185880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.398 [2024-11-20 10:04:27.185887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.398 [2024-11-20 10:04:27.185894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.398 [2024-11-20 10:04:27.185908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.398 qpair failed and we were unable to recover it. 
00:30:56.398 [2024-11-20 10:04:27.195845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.398 [2024-11-20 10:04:27.195935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.398 [2024-11-20 10:04:27.195949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.398 [2024-11-20 10:04:27.195956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.398 [2024-11-20 10:04:27.195963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.398 [2024-11-20 10:04:27.195978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.398 qpair failed and we were unable to recover it. 00:30:56.398 [2024-11-20 10:04:27.205906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.398 [2024-11-20 10:04:27.205955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.398 [2024-11-20 10:04:27.205972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.398 [2024-11-20 10:04:27.205980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.398 [2024-11-20 10:04:27.205987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.398 [2024-11-20 10:04:27.206001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.398 qpair failed and we were unable to recover it. 00:30:56.398 [2024-11-20 10:04:27.215929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.398 [2024-11-20 10:04:27.215997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.398 [2024-11-20 10:04:27.216011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.398 [2024-11-20 10:04:27.216018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.398 [2024-11-20 10:04:27.216025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.398 [2024-11-20 10:04:27.216039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.398 qpair failed and we were unable to recover it. 
00:30:56.398 [2024-11-20 10:04:27.225928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.398 [2024-11-20 10:04:27.225976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.398 [2024-11-20 10:04:27.225989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.398 [2024-11-20 10:04:27.225996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.398 [2024-11-20 10:04:27.226003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.398 [2024-11-20 10:04:27.226017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.398 qpair failed and we were unable to recover it. 00:30:56.398 [2024-11-20 10:04:27.235956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.398 [2024-11-20 10:04:27.236001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.398 [2024-11-20 10:04:27.236014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.398 [2024-11-20 10:04:27.236021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.398 [2024-11-20 10:04:27.236028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.398 [2024-11-20 10:04:27.236043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.398 qpair failed and we were unable to recover it. 00:30:56.398 [2024-11-20 10:04:27.245924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.398 [2024-11-20 10:04:27.245978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.398 [2024-11-20 10:04:27.245991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.398 [2024-11-20 10:04:27.245998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.398 [2024-11-20 10:04:27.246008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.398 [2024-11-20 10:04:27.246023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.398 qpair failed and we were unable to recover it. 
00:30:56.398 [2024-11-20 10:04:27.256009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.398 [2024-11-20 10:04:27.256058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.398 [2024-11-20 10:04:27.256071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.398 [2024-11-20 10:04:27.256078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.398 [2024-11-20 10:04:27.256085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.398 [2024-11-20 10:04:27.256099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.398 qpair failed and we were unable to recover it. 00:30:56.398 [2024-11-20 10:04:27.265922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.398 [2024-11-20 10:04:27.265973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.398 [2024-11-20 10:04:27.265986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.398 [2024-11-20 10:04:27.265994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.398 [2024-11-20 10:04:27.266001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.398 [2024-11-20 10:04:27.266015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.398 qpair failed and we were unable to recover it. 00:30:56.398 [2024-11-20 10:04:27.276050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.398 [2024-11-20 10:04:27.276100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.398 [2024-11-20 10:04:27.276113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.398 [2024-11-20 10:04:27.276120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.398 [2024-11-20 10:04:27.276127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.398 [2024-11-20 10:04:27.276141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.398 qpair failed and we were unable to recover it. 
00:30:56.398 [2024-11-20 10:04:27.286099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.398 [2024-11-20 10:04:27.286156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.398 [2024-11-20 10:04:27.286172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.398 [2024-11-20 10:04:27.286180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.398 [2024-11-20 10:04:27.286186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.398 [2024-11-20 10:04:27.286201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.398 qpair failed and we were unable to recover it. 00:30:56.398 [2024-11-20 10:04:27.296110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.399 [2024-11-20 10:04:27.296161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.399 [2024-11-20 10:04:27.296174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.399 [2024-11-20 10:04:27.296182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.399 [2024-11-20 10:04:27.296189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.399 [2024-11-20 10:04:27.296203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.399 qpair failed and we were unable to recover it. 00:30:56.399 [2024-11-20 10:04:27.306027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.399 [2024-11-20 10:04:27.306078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.399 [2024-11-20 10:04:27.306091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.399 [2024-11-20 10:04:27.306098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.399 [2024-11-20 10:04:27.306105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.399 [2024-11-20 10:04:27.306120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.399 qpair failed and we were unable to recover it. 
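The blocks above repeat the same failure signature on a roughly 10 ms retry cadence: the target rejects each I/O-queue CONNECT with "Unknown controller ID 0x1" (the test has already torn the controller down on the target side), and on the host that surfaces as rc -5 (-EIO) with sct 1, sc 130 — that is 0x82, which in the NVMe-oF Fabrics command set denotes CONNECT Invalid Parameters. Up to this point every attempt lands on the same host qpair handle, tqpair=0x7f389c000b90, qpair id 1. A quick triage over a saved copy of this console output could look like the sketch below (build.log is an assumed capture file name; the test itself does not produce it):

    # Hypothetical post-processing: count CONNECT failures per target qpair handle.
    grep -oE 'Failed to connect tqpair=0x[0-9a-f]+' build.log | sort | uniq -c | sort -rn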
00:30:56.661 [2024-11-20 10:04:27.316141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.661 [2024-11-20 10:04:27.316191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.661 [2024-11-20 10:04:27.316204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.661 [2024-11-20 10:04:27.316212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.661 [2024-11-20 10:04:27.316218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f389c000b90 00:30:56.661 [2024-11-20 10:04:27.316233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.661 qpair failed and we were unable to recover it. 00:30:56.661 [2024-11-20 10:04:27.326303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.661 [2024-11-20 10:04:27.326444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.661 [2024-11-20 10:04:27.326508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.661 [2024-11-20 10:04:27.326533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.661 [2024-11-20 10:04:27.326555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9410c0 00:30:56.661 [2024-11-20 10:04:27.326609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:56.661 qpair failed and we were unable to recover it. 00:30:56.661 [2024-11-20 10:04:27.336259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.661 [2024-11-20 10:04:27.336335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.661 [2024-11-20 10:04:27.336375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.661 [2024-11-20 10:04:27.336392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.661 [2024-11-20 10:04:27.336406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9410c0 00:30:56.661 [2024-11-20 10:04:27.336438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:56.661 qpair failed and we were unable to recover it. 
00:30:56.661 [2024-11-20 10:04:27.346253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.661 [2024-11-20 10:04:27.346358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.661 [2024-11-20 10:04:27.346421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.661 [2024-11-20 10:04:27.346447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.661 [2024-11-20 10:04:27.346468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3894000b90 00:30:56.661 [2024-11-20 10:04:27.346523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:56.661 qpair failed and we were unable to recover it. 00:30:56.661 [2024-11-20 10:04:27.356260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.661 [2024-11-20 10:04:27.356330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.661 [2024-11-20 10:04:27.356357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.661 [2024-11-20 10:04:27.356371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.661 [2024-11-20 10:04:27.356384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3894000b90 00:30:56.661 [2024-11-20 10:04:27.356415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:56.661 qpair failed and we were unable to recover it. 00:30:56.661 [2024-11-20 10:04:27.366383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.661 [2024-11-20 10:04:27.366489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.661 [2024-11-20 10:04:27.366553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.661 [2024-11-20 10:04:27.366578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.661 [2024-11-20 10:04:27.366599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3890000b90 00:30:56.661 [2024-11-20 10:04:27.366654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:56.661 qpair failed and we were unable to recover it. 
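From 10:04:27.316 onward the failing handle is no longer constant: the host retries across all of its poll groups, so the same rejection is now reported for tqpair=0x9410c0 (qpair id 3), 0x7f3894000b90 (id 2), and 0x7f3890000b90 (id 4) as well — one I/O queue pair per core of the four-core workload set up earlier. A sketch that pairs each handle with its queue id from a saved log (same build.log assumption as above; it also assumes the two patterns alternate, which holds for these failure blocks):

    # Pair each 'tqpair=' handle with the qpair id it failed on.
    grep -oE 'connect tqpair=0x[0-9a-f]+|on qpair id [0-9]+' build.log |
        paste - - | sort | uniq -c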
00:30:56.661 [2024-11-20 10:04:27.376286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.661 [2024-11-20 10:04:27.376366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.661 [2024-11-20 10:04:27.376394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.661 [2024-11-20 10:04:27.376409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.661 [2024-11-20 10:04:27.376430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3890000b90 00:30:56.661 [2024-11-20 10:04:27.376461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:56.661 qpair failed and we were unable to recover it. 00:30:56.661 [2024-11-20 10:04:27.376652] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:30:56.661 A controller has encountered a failure and is being reset. 00:30:56.661 Controller properly reset. 00:30:56.661 Initializing NVMe Controllers 00:30:56.662 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:56.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:56.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:56.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:56.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:56.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:56.662 Initialization complete. Launching workers. 
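The failed keep-alive submission at 10:04:27.376652 is what finally trips the host's reset path: the controller is reset, the admin queue reconnects to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, and one I/O queue pair is re-associated with each of lcores 0-3. The "Starting thread on core N" records that follow confirm all four workers resumed, which is exactly the recovery that this test case (target_disconnect_tc2) is designed to exercise. A hypothetical sanity check over the same assumed build.log capture:

    # Recovery happened iff the reset completed and all four workers restarted.
    # grep -o is used so the check works even if log records share physical lines.
    grep -q 'Controller properly reset' build.log &&
        [ "$(grep -o 'Starting thread on core' build.log | wc -l)" -eq 4 ] &&
        echo 'tc2 recovery path exercised'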
00:30:56.662 Starting thread on core 1 00:30:56.662 Starting thread on core 2 00:30:56.662 Starting thread on core 3 00:30:56.662 Starting thread on core 0 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:56.662 00:30:56.662 real 0m11.361s 00:30:56.662 user 0m22.196s 00:30:56.662 sys 0m3.642s 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:56.662 ************************************ 00:30:56.662 END TEST nvmf_target_disconnect_tc2 00:30:56.662 ************************************ 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:56.662 rmmod nvme_tcp 00:30:56.662 rmmod nvme_fabrics 00:30:56.662 rmmod nvme_keyring 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1565536 ']' 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1565536 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1565536 ']' 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1565536 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:56.662 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1565536 00:30:56.925 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:30:56.925 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:30:56.925 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1565536' 00:30:56.925 killing process with pid 1565536 00:30:56.925 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 1565536 00:30:56.925 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1565536 00:30:56.925 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:56.925 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:56.925 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:56.925 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:56.925 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:56.925 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:30:56.925 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:30:56.925 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:56.925 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:56.925 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.925 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.925 10:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.474 10:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:59.474 00:30:59.474 real 0m21.713s 00:30:59.474 user 0m49.785s 00:30:59.474 sys 0m9.791s 00:30:59.474 10:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:59.474 10:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:59.474 ************************************ 00:30:59.474 END TEST nvmf_target_disconnect 00:30:59.474 ************************************ 00:30:59.474 10:04:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:59.474 00:30:59.474 real 6m33.760s 00:30:59.474 user 11m36.306s 00:30:59.474 sys 2m15.441s 00:30:59.474 10:04:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:59.474 10:04:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.474 ************************************ 00:30:59.474 END TEST nvmf_host 00:30:59.474 ************************************ 00:30:59.474 10:04:29 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:59.474 10:04:29 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:59.474 10:04:29 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:59.474 10:04:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:59.474 10:04:29 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:59.474 10:04:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:59.474 ************************************ 00:30:59.474 START TEST nvmf_target_core_interrupt_mode 00:30:59.474 ************************************ 00:30:59.474 10:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:59.474 * Looking for test storage... 00:30:59.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:59.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.474 --rc genhtml_branch_coverage=1 00:30:59.474 --rc genhtml_function_coverage=1 00:30:59.474 --rc genhtml_legend=1 00:30:59.474 --rc geninfo_all_blocks=1 00:30:59.474 --rc geninfo_unexecuted_blocks=1 00:30:59.474 00:30:59.474 ' 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:59.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.474 --rc genhtml_branch_coverage=1 00:30:59.474 --rc genhtml_function_coverage=1 00:30:59.474 --rc genhtml_legend=1 00:30:59.474 --rc geninfo_all_blocks=1 00:30:59.474 --rc geninfo_unexecuted_blocks=1 00:30:59.474 00:30:59.474 ' 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:59.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.474 --rc genhtml_branch_coverage=1 00:30:59.474 --rc genhtml_function_coverage=1 00:30:59.474 --rc genhtml_legend=1 00:30:59.474 --rc geninfo_all_blocks=1 00:30:59.474 --rc geninfo_unexecuted_blocks=1 00:30:59.474 00:30:59.474 ' 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:59.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.474 --rc genhtml_branch_coverage=1 00:30:59.474 --rc genhtml_function_coverage=1 00:30:59.474 --rc genhtml_legend=1 00:30:59.474 --rc geninfo_all_blocks=1 00:30:59.474 --rc geninfo_unexecuted_blocks=1 00:30:59.474 00:30:59.474 ' 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.474 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:59.475 ************************************ 00:30:59.475 START TEST nvmf_abort 00:30:59.475 ************************************ 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:59.475 * Looking for test storage... 00:30:59.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:30:59.475 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:59.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.738 --rc genhtml_branch_coverage=1 00:30:59.738 --rc genhtml_function_coverage=1 00:30:59.738 --rc genhtml_legend=1 00:30:59.738 --rc geninfo_all_blocks=1 00:30:59.738 --rc geninfo_unexecuted_blocks=1 00:30:59.738 00:30:59.738 ' 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:59.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.738 --rc genhtml_branch_coverage=1 00:30:59.738 --rc genhtml_function_coverage=1 00:30:59.738 --rc genhtml_legend=1 00:30:59.738 --rc geninfo_all_blocks=1 00:30:59.738 --rc geninfo_unexecuted_blocks=1 00:30:59.738 00:30:59.738 ' 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:59.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.738 --rc genhtml_branch_coverage=1 00:30:59.738 --rc genhtml_function_coverage=1 00:30:59.738 --rc genhtml_legend=1 00:30:59.738 --rc geninfo_all_blocks=1 00:30:59.738 --rc geninfo_unexecuted_blocks=1 00:30:59.738 00:30:59.738 ' 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:59.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.738 --rc genhtml_branch_coverage=1 00:30:59.738 --rc genhtml_function_coverage=1 00:30:59.738 --rc genhtml_legend=1 00:30:59.738 --rc geninfo_all_blocks=1 00:30:59.738 --rc geninfo_unexecuted_blocks=1 00:30:59.738 00:30:59.738 ' 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.738 10:04:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:59.738 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:07.885 10:04:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:07.885 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
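The trace above is gather_supported_nvmf_pci_devs building ID tables for Intel E810/X722 and Mellanox NICs and then matching each PCI function against them; here 0000:4b:00.0 reports vendor 0x8086, device 0x159b (an E810 port bound to the ice driver), and the checks continuing below complete the match for both ports. A minimal sketch of the same classification from sysfs follows; the ID table is trimmed to the E810 IDs visible in this run and is not the full list from nvmf/common.sh:

```bash
#!/usr/bin/env bash
# Classify a PCI function the way the trace does: read its vendor:device
# pair from sysfs and check it against a table of known NIC IDs.
declare -A supported=(
  ["0x8086:0x159b"]="e810"   # the ID reported for 0000:4b:00.0/.1 above
  ["0x8086:0x1592"]="e810"
)

classify_pci_nic() {
  local pci=$1 vendor device driver
  vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")
  device=$(cat "/sys/bus/pci/devices/$pci/device")
  # The driver symlink is absent for unbound devices, mirroring the
  # unknown/unbound checks in the trace.
  if [[ -e /sys/bus/pci/devices/$pci/driver ]]; then
    driver=$(basename "$(readlink "/sys/bus/pci/devices/$pci/driver")")
  else
    driver=unbound
  fi
  if [[ -n ${supported["$vendor:$device"]} ]]; then
    echo "Found $pci ($vendor - $device), driver: $driver"
  fi
}

classify_pci_nic 0000:4b:00.0
```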
00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.885 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:07.886 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:07.886 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:07.886 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:07.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:07.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:31:07.886 00:31:07.886 --- 10.0.0.2 ping statistics --- 00:31:07.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.886 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:07.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:07.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:31:07.886 00:31:07.886 --- 10.0.0.1 ping statistics --- 00:31:07.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.886 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:07.886 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:07.886 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:31:07.886 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:07.886 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:07.886 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:07.886 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1570969 00:31:07.886 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1570969 00:31:07.886 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:07.886 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1570969 ']' 00:31:07.886 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.886 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:07.886 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.886 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:07.886 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:07.886 [2024-11-20 10:04:38.081027] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:07.886 [2024-11-20 10:04:38.082182] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:31:07.886 [2024-11-20 10:04:38.082235] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:07.886 [2024-11-20 10:04:38.180996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:07.886 [2024-11-20 10:04:38.232024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:07.886 [2024-11-20 10:04:38.232073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:07.886 [2024-11-20 10:04:38.232082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:07.886 [2024-11-20 10:04:38.232090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:07.886 [2024-11-20 10:04:38.232101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:07.886 [2024-11-20 10:04:38.234187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:07.886 [2024-11-20 10:04:38.234335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:07.887 [2024-11-20 10:04:38.234480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.887 [2024-11-20 10:04:38.310792] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:07.887 [2024-11-20 10:04:38.311669] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:07.887 [2024-11-20 10:04:38.312072] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:07.887 [2024-11-20 10:04:38.312268] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:08.148 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:08.148 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:31:08.149 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:08.149 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:08.149 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.149 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:08.149 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:31:08.149 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.149 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.149 [2024-11-20 10:04:38.939432] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.149 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.149 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:31:08.149 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.149 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.149 Malloc0 00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.149 Delay0 00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
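From nvmf_create_transport down to the listeners added in the entries that follow, the configuration phase is a series of JSON-RPC calls against the target's /var/tmp/spdk.sock; rpc_cmd in the trace is a retry/xtrace wrapper around rpc.py. Replayed as bare calls, with flags copied verbatim from the trace (the latency comment assumes bdev_delay parameters are in microseconds):

```bash
#!/usr/bin/env bash
# Configuration phase of target/abort.sh as direct rpc.py calls.
# Assumes an nvmf_tgt is already up on the default RPC socket.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0        # 64 MB RAM disk, 4096-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s injected latency per I/O
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```

The delay bdev appears to be the point of the test: with roughly a second of injected latency, abort commands submitted by the example app find I/O still outstanding to cancel.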
00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.149 [2024-11-20 10:04:39.047368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.149 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.410 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.410 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:31:08.410 [2024-11-20 10:04:39.232237] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:10.959 Initializing NVMe Controllers 00:31:10.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:10.959 controller IO queue size 128 less than required 00:31:10.959 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:31:10.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:31:10.959 Initialization complete. Launching workers. 
00:31:10.959 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28476 00:31:10.959 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28533, failed to submit 66 00:31:10.959 success 28476, unsuccessful 57, failed 0 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:10.959 rmmod nvme_tcp 00:31:10.959 rmmod nvme_fabrics 00:31:10.959 rmmod nvme_keyring 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1570969 ']' 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1570969 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1570969 ']' 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1570969 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1570969 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1570969' 00:31:10.959 killing process with pid 1570969 
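Reading the counters above, the 28,476 "failed" I/Os line up with the 28,476 successful aborts, which is the expected outcome of the test. Teardown, which completes over the next entries, then runs in a fixed order: reset the trap, unload the initiator-side kernel modules, kill and reap the target, strip the SPDK-tagged firewall rule, and remove the namespace. A condensed sketch of that sequence (the real killprocess also verifies the process name first, as the ps check above shows):

```bash
#!/usr/bin/env bash
# Condensed teardown mirroring nvmftestfini/killprocess in the trace.
nvmfpid=1570969   # pid captured when nvmf_tgt was launched

sync
modprobe -v -r nvme-tcp          # also drops nvme_fabrics/nvme_keyring once unused

kill "$nvmfpid"
wait "$nvmfpid" 2>/dev/null || true   # reaps the target when run from the launching shell

# Drop only the rules tagged SPDK_NVMF, leaving the rest of the firewall intact.
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip netns delete cvl_0_0_ns_spdk  # returns cvl_0_0 to the root namespace
ip -4 addr flush cvl_0_1
```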
00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1570969 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1570969 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.959 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.509 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:13.509 00:31:13.509 real 0m13.558s 00:31:13.509 user 0m11.379s 00:31:13.509 sys 0m7.074s 00:31:13.509 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:13.509 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:13.509 ************************************ 00:31:13.509 END TEST nvmf_abort 00:31:13.509 ************************************ 00:31:13.509 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:13.509 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:13.509 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:13.509 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:13.509 ************************************ 00:31:13.509 START TEST nvmf_ns_hotplug_stress 00:31:13.509 ************************************ 00:31:13.509 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:13.509 * Looking for test storage... 
00:31:13.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:13.509 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:13.509 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:31:13.509 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:13.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.509 --rc genhtml_branch_coverage=1 00:31:13.509 --rc genhtml_function_coverage=1 00:31:13.509 --rc genhtml_legend=1 00:31:13.509 --rc geninfo_all_blocks=1 00:31:13.509 --rc geninfo_unexecuted_blocks=1 00:31:13.509 00:31:13.509 ' 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:13.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.509 --rc genhtml_branch_coverage=1 00:31:13.509 --rc genhtml_function_coverage=1 00:31:13.509 --rc genhtml_legend=1 00:31:13.509 --rc geninfo_all_blocks=1 00:31:13.509 --rc geninfo_unexecuted_blocks=1 00:31:13.509 00:31:13.509 ' 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:13.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.509 --rc genhtml_branch_coverage=1 00:31:13.509 --rc genhtml_function_coverage=1 00:31:13.509 --rc genhtml_legend=1 00:31:13.509 --rc geninfo_all_blocks=1 00:31:13.509 --rc geninfo_unexecuted_blocks=1 00:31:13.509 00:31:13.509 ' 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:13.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.509 --rc genhtml_branch_coverage=1 00:31:13.509 --rc genhtml_function_coverage=1 
00:31:13.509 --rc genhtml_legend=1 00:31:13.509 --rc geninfo_all_blocks=1 00:31:13.509 --rc geninfo_unexecuted_blocks=1 00:31:13.509 00:31:13.509 ' 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:13.509 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:31:13.510 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:21.656 10:04:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:21.656 10:04:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:21.656 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:21.656 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.656 
10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:21.656 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:21.656 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:21.656 10:04:51 
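
[Annotation] The trace above shows nvmf/common.sh enumerating the E810 NICs by PCI vendor/device ID and then mapping each PCI function to its kernel net device through sysfs (the pci_net_devs glob at common.sh@411). A minimal sketch of that mapping step, using the PCI addresses from this run; the loop body mirrors the script's glob and its "Found net devices under ..." output:

    #!/usr/bin/env bash
    # For each NIC PCI function, list the net devices the kernel bound to it,
    # mirroring pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in common.sh.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$dev" ] || continue      # skip functions with no bound netdev
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done
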
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:21.656 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:21.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:21.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:31:21.657 00:31:21.657 --- 10.0.0.2 ping statistics --- 00:31:21.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.657 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:21.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:21.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:31:21.657 00:31:21.657 --- 10.0.0.1 ping statistics --- 00:31:21.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.657 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1575750 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1575750 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1575750 ']' 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
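
[Annotation] nvmf_tcp_init (nvmf/common.sh@250 onward, traced above) builds the point-to-point test topology: the target-side interface cvl_0_0 moves into a dedicated network namespace, both ends get /24 addresses, an iptables rule opens TCP port 4420 on the initiator side, and a ping in each direction verifies the link before the target starts. A condensed sketch of the same sequence, with the interface names and addresses taken from this run (the framework additionally tags its iptables rule with an SPDK_NVMF comment for later cleanup):

    #!/usr/bin/env bash
    set -e
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"            # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1     # namespace -> root ns
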
00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:21.657 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:21.657 [2024-11-20 10:04:51.695210] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:21.657 [2024-11-20 10:04:51.696371] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:31:21.657 [2024-11-20 10:04:51.696423] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.657 [2024-11-20 10:04:51.797322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:21.657 [2024-11-20 10:04:51.848720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:21.657 [2024-11-20 10:04:51.848769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:21.657 [2024-11-20 10:04:51.848778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:21.657 [2024-11-20 10:04:51.848785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:21.657 [2024-11-20 10:04:51.848792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:21.657 [2024-11-20 10:04:51.850658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:21.657 [2024-11-20 10:04:51.850825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.657 [2024-11-20 10:04:51.850824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:21.657 [2024-11-20 10:04:51.927588] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:21.657 [2024-11-20 10:04:51.928632] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:21.657 [2024-11-20 10:04:51.929241] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:21.657 [2024-11-20 10:04:51.929363] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
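
[Annotation] nvmfappstart then launches nvmf_tgt inside that namespace with --interrupt-mode, which is why the thread.c notices above report the app thread and the three nvmf poll-group threads (core mask 0xE, reactors on cores 1-3) coming up in interrupt rather than polled mode. A sketch of the launch; the readiness poll at the end is a simple stand-in for the test framework's waitforlisten helper, not the helper itself:

    #!/usr/bin/env bash
    # Start the target in the target-side namespace: shared-memory id 0,
    # all tracepoint groups (0xFFFF), interrupt mode, 3 cores (mask 0xE).
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    # Poll the RPC socket until the target answers before configuring it.
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.3
    done
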
00:31:21.657 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:21.657 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:31:21.657 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:21.657 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:21.657 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:21.657 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:21.657 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:31:21.657 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:21.918 [2024-11-20 10:04:52.707732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.918 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:22.180 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:22.180 [2024-11-20 10:04:53.088432] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.441 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:22.441 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:31:22.703 Malloc0 00:31:22.703 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:22.965 Delay0 00:31:22.965 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:22.965 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:31:23.226 NULL1 00:31:23.226 10:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
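
[Annotation] With the target listening on the RPC socket, ns_hotplug_stress.sh@27-36 assembles the test stack over RPC, as traced above: a TCP transport, subsystem cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and the two bdevs the loop will juggle, Delay0 (a delay bdev stacked on Malloc0) and the resizable null bdev NULL1. The same sequence as a standalone sketch; the inline unit notes are my reading of the RPC arguments, not part of the script:

    #!/usr/bin/env bash
    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_malloc_create 32 512 -b Malloc0     # 32 MiB backing bdev, 512 B blocks
    $RPC bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latency knobs, in microseconds
    $RPC bdev_null_create NULL1 1000 512          # null bdev: size 1000, 512 B blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
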
00:31:23.487 10:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:31:23.487 10:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1576350 00:31:23.487 10:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:23.487 10:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.748 10:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:23.748 10:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:31:23.748 10:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:31:24.010 true 00:31:24.010 10:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:24.010 10:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.271 10:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:24.533 10:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:31:24.533 10:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:31:24.533 true 00:31:24.795 10:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:24.795 10:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.795 10:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:25.056 10:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:31:25.056 10:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:31:25.317 true 00:31:25.317 10:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:25.317 10:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.577 10:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:25.577 10:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:31:25.577 10:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:31:25.838 true 00:31:25.838 10:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:25.838 10:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.099 10:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:26.361 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:31:26.361 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:31:26.361 true 00:31:26.361 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:26.361 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.622 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:26.884 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:31:26.884 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:31:26.884 true 00:31:26.884 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:26.884 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.145 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:31:27.405 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:31:27.405 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:31:27.405 true 00:31:27.666 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:27.667 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.667 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:27.929 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:27.929 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:28.190 true 00:31:28.190 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:28.190 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.190 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:28.453 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:28.453 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:28.715 true 00:31:28.715 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:28.715 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.715 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:28.975 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:28.975 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:29.236 true 00:31:29.236 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 1576350 00:31:29.236 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.496 10:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:29.496 10:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:29.496 10:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:29.756 true 00:31:29.756 10:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:29.756 10:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.017 10:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:30.017 10:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:30.017 10:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:30.277 true 00:31:30.277 10:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:30.277 10:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.538 10:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:30.798 10:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:30.798 10:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:30.798 true 00:31:30.798 10:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:30.798 10:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.059 10:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:31.320 10:05:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:31.320 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:31.320 true 00:31:31.320 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:31.320 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.583 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:31.844 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:31:31.844 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:31.844 true 00:31:32.105 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:32.105 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.105 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:32.366 10:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:32.366 10:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:32.627 true 00:31:32.627 10:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:32.627 10:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.627 10:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:32.888 10:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:31:32.888 10:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:31:33.148 true 00:31:33.148 10:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:33.148 10:05:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:33.148 10:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:33.409 10:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:31:33.409 10:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:33.670 true 00:31:33.670 10:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:33.670 10:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:33.931 10:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:33.931 10:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:33.931 10:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:34.191 true 00:31:34.191 10:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:34.191 10:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:34.451 10:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:34.451 10:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:34.451 10:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:34.711 true 00:31:34.711 10:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:34.711 10:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:34.971 10:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:34.971 10:05:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:34.971 10:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:35.233 true 00:31:35.233 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:35.233 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:35.494 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:35.755 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:35.755 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:35.755 true 00:31:35.755 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:35.755 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:36.017 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:36.279 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:36.279 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:36.279 true 00:31:36.279 10:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:36.279 10:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:36.540 10:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:36.802 10:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:36.802 10:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:36.802 true 00:31:37.064 10:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:37.064 10:05:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:37.064 10:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:37.395 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:37.395 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:37.395 true 00:31:37.692 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:37.692 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:37.692 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:37.971 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:37.971 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:37.971 true 00:31:37.971 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:37.971 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:38.231 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:38.491 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:38.491 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:38.491 true 00:31:38.491 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:38.491 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:38.751 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:39.012 10:05:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:39.012 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:39.012 true 00:31:39.272 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:39.272 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:39.272 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:39.532 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:31:39.532 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:31:39.792 true 00:31:39.792 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:39.792 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:39.792 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:40.052 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:31:40.052 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:31:40.313 true 00:31:40.313 10:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:40.313 10:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:40.573 10:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:40.573 10:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:31:40.573 10:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:31:40.833 true 00:31:40.833 10:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:40.833 10:05:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:41.093 10:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:41.093 10:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:31:41.093 10:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:31:41.353 true 00:31:41.353 10:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:41.353 10:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:41.614 10:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:41.874 10:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:31:41.874 10:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:31:41.874 true 00:31:41.874 10:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:41.874 10:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:42.134 10:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:42.394 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:31:42.394 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:31:42.394 true 00:31:42.394 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:42.394 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:42.653 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:42.913 10:05:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:31:42.913 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:31:43.173 true 00:31:43.173 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:43.173 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:43.173 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:43.435 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:31:43.435 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:31:43.696 true 00:31:43.696 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:43.696 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:43.696 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:43.957 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:31:43.957 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:31:44.218 true 00:31:44.218 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:44.218 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:44.479 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:44.479 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:31:44.479 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:31:44.739 true 00:31:44.739 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:44.739 10:05:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:45.000 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:45.000 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:31:45.000 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:31:45.260 true 00:31:45.260 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:45.260 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:45.521 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:45.781 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:31:45.781 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:31:45.781 true 00:31:45.781 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:45.781 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:46.040 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:46.300 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:31:46.300 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:31:46.300 true 00:31:46.300 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:46.300 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:46.559 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:46.819 10:05:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:31:46.819 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:31:47.079 true 00:31:47.079 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:47.079 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:47.079 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:47.339 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:31:47.339 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:31:47.599 true 00:31:47.599 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:47.599 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:47.599 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:47.858 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:31:47.858 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:31:48.116 true 00:31:48.116 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:48.116 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:48.375 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:48.375 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:31:48.375 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:31:48.636 true 00:31:48.636 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:48.636 10:05:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:48.898 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:48.898 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:31:48.899 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:31:49.159 true 00:31:49.159 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:49.159 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:49.420 10:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:49.681 10:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:31:49.681 10:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:31:49.681 true 00:31:49.681 10:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:49.681 10:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:49.942 10:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:50.202 10:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:31:50.202 10:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:31:50.202 true 00:31:50.202 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:50.202 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:50.463 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:50.724 10:05:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:31:50.724 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:31:50.724 true 00:31:50.986 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:50.986 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:50.986 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:51.249 10:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:31:51.249 10:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:31:51.509 true 00:31:51.509 10:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:51.509 10:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:51.509 10:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:51.770 10:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:31:51.770 10:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:31:52.030 true 00:31:52.030 10:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:52.030 10:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:52.291 10:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:52.291 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:31:52.291 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:31:52.552 true 00:31:52.552 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350 00:31:52.552 10:05:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:52.812 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:52.812 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:31:52.812 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:31:53.073 true
00:31:53.073 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350
00:31:53.073 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:53.333 10:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:53.594 10:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:31:53.594 10:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:31:53.594 true
00:31:53.594 10:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350
00:31:53.594 10:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:53.855 Initializing NVMe Controllers
00:31:53.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:53.855 Controller IO queue size 128, less than required.
00:31:53.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:53.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:53.855 Initialization complete. Launching workers.
00:31:53.855 ========================================================
00:31:53.855                                                                                                       Latency(us)
00:31:53.855 Device Information                                                 : IOPS      MiB/s    Average        min        max
00:31:53.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30365.96  14.83    4215.16    1114.05   11070.54
00:31:53.855 ========================================================
00:31:53.855 Total                                                              : 30365.96  14.83    4215.16    1114.05   11070.54
00:31:53.855
00:31:53.855 10:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:54.116 10:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:31:54.116 10:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:31:54.116 true
00:31:54.116 10:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1576350
00:31:54.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1576350) - No such process
00:31:54.116 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1576350
00:31:54.116 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:54.376 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:31:54.636 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:31:54.636 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:31:54.636 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:31:54.636 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:54.636 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:31:54.636 null0
00:31:54.636 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:54.636 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:54.636 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:31:54.896 null1
00:31:54.896 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:54.896 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
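The churn traced above at ns_hotplug_stress.sh lines 44-50 reduces to a small polling loop: keep namespace 1 flapping and the NULL1 bdev growing for as long as the I/O generator (PID 1576350 in this run) stays alive. A minimal bash sketch, assuming a PERF_PID variable for that PID and rpc.py on PATH; the trace invokes it by absolute path:

    # Runs until the I/O generator exits; each pass hot-removes and
    # re-adds namespace 1, then grows the null bdev by one block.
    null_size=1024
    while kill -0 "$PERF_PID" 2> /dev/null; do                          # sh@44
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46
        null_size=$((null_size + 1))                                    # sh@49
        rpc.py bdev_null_resize NULL1 $null_size                        # sh@50
    done
    wait "$PERF_PID"                                                    # sh@53

PERF_PID and the starting null_size are assumptions for illustration; the loop body and the RPC arguments are taken directly from the trace, whose null_size values climb one at a time from 1035 to 1055 before the generator exits.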
00:31:54.896 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:55.157 null2 00:31:55.157 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:55.157 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:55.157 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:55.157 null3 00:31:55.157 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:55.157 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:55.157 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:55.417 null4 00:31:55.417 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:55.417 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:55.417 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:55.677 null5 00:31:55.677 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:55.677 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:55.677 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:55.677 null6 00:31:55.677 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:55.677 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:55.677 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:55.939 null7 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
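The interleaved entries that follow come from eight backgrounded workers, one per namespace, each driving the add_remove helper traced at lines 14-18 against nqn.2016-06.io.spdk:cnode1. A minimal sketch of that spawn-and-wait pattern, again with rpc.py standing in for the absolute path shown in the trace:

    # Worker: ten rounds of hot-add / hot-remove for one NSID, matching
    # the "(( i < 10 ))" guards in the trace.
    add_remove() {
        local nsid=$1 bdev=$2                                                        # sh@14
        for ((i = 0; i < 10; i++)); do                                               # sh@16
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }

    nthreads=8                                                                       # sh@58
    pids=()
    for ((i = 0; i < nthreads; i++)); do                                             # sh@62
        add_remove "$((i + 1))" "null$i" &                                           # sh@63
        pids+=($!)                                                                   # sh@64
    done
    wait "${pids[@]}"                                                                # sh@66

With eight writers racing on one subsystem, the add and remove calls for different namespaces interleave freely, which is the hotplug stress that the shuffled ordering of the nvmf_subsystem_add_ns/nvmf_subsystem_remove_ns entries below reflects.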
00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1582525 1582527 1582528 1582530 1582532 1582534 1582537 1582539 00:31:55.939 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:55.940 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:55.940 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:55.940 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:56.200 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:56.200 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:56.200 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:56.200 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:56.200 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:56.200 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:56.200 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:56.200 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.462 10:05:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.462 10:05:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:56.462 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.723 10:05:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.723 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:56.986 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:56.986 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:56.986 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:56.986 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:56.986 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:56.986 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:56.986 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:56.986 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:56.986 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.986 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.986 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:56.986 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:56.986 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:56.986 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:57.247 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.247 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.247 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:57.247 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.247 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.247 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:57.247 10:05:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.247 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.247 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:57.247 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.247 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.247 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:57.247 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.247 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.247 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:57.247 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.247 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.247 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:57.247 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:57.247 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:57.247 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:57.247 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:57.247 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:57.247 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:57.247 
10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:57.508 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:57.769 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:57.769 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:57.769 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:57.769 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:57.769 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:57.769 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:57.769 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.769 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.769 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:31:57.769 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.769 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.769 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:57.769 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.769 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.769 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:57.769 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.769 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.769 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
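
The add/remove churn traced above is driven by three one-line steps in target/ns_hotplug_stress.sh: @16 advances and bounds a counter, @17 attaches one of the null bdevs (null0 through null7) to subsystem nqn.2016-06.io.spdk:cnode1 as namespace 1 through 8, and @18 detaches it again. A minimal bash sketch of that shape, reconstructed from the trace alone; the helper name and the one-worker-per-namespace layout are assumptions, not the script's actual code:

    # Sketch reconstructed from the xtrace above; worker layout is assumed.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    hotplug_worker() {                 # hypothetical helper, not in the script
        local nsid=$1 bdev=$2 i=0
        while (( i < 10 )); do                                       # @16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
            (( ++i ))
        done
    }

    for n in {1..8}; do                # namespace N is backed by bdev null(N-1)
        hotplug_worker "$n" "null$((n - 1))" &
    done
    wait

Running the eight workers in the background is what would produce the out-of-order interleaving of add_ns and remove_ns calls visible in the log.
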
00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.032 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:58.294 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:58.294 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.294 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.294 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:58.294 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.294 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.294 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:58.294 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.294 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.294 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:58.294 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:58.294 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.294 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.294 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:58.294 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.294 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.294 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:58.294 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.294 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.294 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:58.294 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:58.294 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:58.554 10:05:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:58.554 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:58.554 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.554 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.554 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:58.554 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:58.554 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:58.554 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.554 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.554 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:58.554 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.554 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.554 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:58.554 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:58.554 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.554 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.554 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:58.554 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:58.554 
10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.554 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.554 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:58.816 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:59.078 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.340 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:59.601 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:59.601 10:05:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.601 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.601 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:59.601 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:59.601 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.601 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.601 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:59.601 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:59.601 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:59.601 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:59.601 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.601 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.601 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.601 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.863 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.863 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.863 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.863 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.863 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:59.863 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.863 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.863 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.863 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.863 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.863 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.863 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:59.863 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:59.863 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:59.863 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:59.863 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:59.863 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:59.863 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:59.863 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:59.863 rmmod nvme_tcp 00:31:59.863 rmmod nvme_fabrics 00:32:00.124 rmmod nvme_keyring 00:32:00.124 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:00.124 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:32:00.124 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:32:00.124 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1575750 ']' 00:32:00.124 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1575750 00:32:00.124 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1575750 ']' 00:32:00.124 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1575750 00:32:00.124 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:32:00.124 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:00.124 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1575750 00:32:00.124 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:00.124 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:00.124 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1575750'
00:32:00.124 killing process with pid 1575750
10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1575750
00:32:00.124 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1575750
00:32:00.385 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:00.385 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:00.385 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:00.385 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:32:00.385 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:32:00.385 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:00.385 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:32:00.385 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:00.385 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:00.385 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:00.385 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:00.385 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:02.302 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:02.302
00:32:02.302 real 0m49.246s
00:32:02.302 user 3m3.415s
00:32:02.302 sys 0m22.659s
00:32:02.302 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:02.302 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:32:02.302 ************************************
00:32:02.302 END TEST nvmf_ns_hotplug_stress
00:32:02.302 ************************************
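
The teardown traced above (nvmftestfini, nvmfcleanup, killprocess, nvmf_tcp_fini) reduces to a short sequence. This is a condensed sketch of what the xtrace shows, not the full helpers from nvmf/common.sh; $nvmfpid stands in for the PID the harness tracks (1575750 here), and the traced _remove_spdk_ns step is left out:

    # Condensed from the trace: unload the kernel initiator modules, stop the
    # target app, then undo the test's TCP network state.
    modprobe -v -r nvme-tcp      # log shows: rmmod nvme_tcp/nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: drop SPDK_NVMF rules
    ip -4 addr flush cvl_0_1                              # release the initiator IP

The iptables round-trip is a tidy idiom: re-importing the saved ruleset minus every SPDK_NVMF-tagged line removes the test's firewall entries without touching anything else on the host.
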
00:32:02.302 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:32:02.302 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:32:02.302 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:02.302 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:02.565 ************************************
00:32:02.565 START TEST nvmf_delete_subsystem
00:32:02.565 ************************************
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:32:02.565 * Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
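
The cmp_versions walk that just returned 0 is a field-wise numeric comparison: lt 1.15 2 splits both strings on the characters '.', '-' and ':' (IFS=.-:), walks the longer field count with missing fields read as 0, and decides on the first unequal field, here 1 < 2. A simplified, self-contained rendering of that logic; the real scripts/common.sh also handles the '>=', '<=' and equality operators:

    lt() { cmp_versions "$1" '<' "$2"; }   # simplified sketch, '<' and '>' only

    cmp_versions() {
        local IFS='.-:' op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"             # "1.15" -> (1 15)
        read -ra ver2 <<< "$3"             # "2"    -> (2)
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        return 1                           # equal: neither strict relation holds
    }

    lt 1.15 2 && echo "lcov predates 2.x"  # first fields differ: 1 < 2, exit 0
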
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:32:02.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:02.565 --rc genhtml_branch_coverage=1
00:32:02.565 --rc genhtml_function_coverage=1
00:32:02.565 --rc genhtml_legend=1
00:32:02.565 --rc geninfo_all_blocks=1
00:32:02.565 --rc geninfo_unexecuted_blocks=1
00:32:02.565
00:32:02.565 '
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:32:02.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:02.565 --rc genhtml_branch_coverage=1
00:32:02.565 --rc genhtml_function_coverage=1
00:32:02.565 --rc genhtml_legend=1
00:32:02.565 --rc geninfo_all_blocks=1
00:32:02.565 --rc geninfo_unexecuted_blocks=1
00:32:02.565
00:32:02.565 '
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:32:02.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:02.565 --rc genhtml_branch_coverage=1
00:32:02.565 --rc genhtml_function_coverage=1
00:32:02.565 --rc genhtml_legend=1
00:32:02.565 --rc geninfo_all_blocks=1
00:32:02.565 --rc geninfo_unexecuted_blocks=1
00:32:02.565
00:32:02.565 '
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:32:02.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:02.565 --rc genhtml_branch_coverage=1
00:32:02.565 --rc genhtml_function_coverage=1
00:32:02.565 --rc genhtml_legend=1
00:32:02.565 --rc geninfo_all_blocks=1
00:32:02.565 --rc geninfo_unexecuted_blocks=1
00:32:02.565
00:32:02.565 '
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:02.565 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:02.565 10:05:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:32:02.566 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:10.712 10:05:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:10.712 10:05:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:10.712 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:10.712 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:10.713 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.713 10:05:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:10.713 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:10.713 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:10.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:10.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:32:10.713 00:32:10.713 --- 10.0.0.2 ping statistics --- 00:32:10.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.713 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:10.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:10.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:32:10.713 00:32:10.713 --- 10.0.0.1 ping statistics --- 00:32:10.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.713 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1587679 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1587679 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1587679 ']' 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:10.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
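For reference, the network setup the trace above just completed can be reproduced by hand. This is a minimal sketch using the names this run happened to produce (the two E810 ports at 0000:4b:00.0/0000:4b:00.1 show up as cvl_0_0/cvl_0_1; the namespace name and 10.0.0.x addresses are the suite defaults); every command below appears in the trace:

  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

Splitting the two physical ports across namespaces forces real TCP traffic over the wire between them even though both live in the same host, which is why the ping round trips above are meaningful.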
00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:10.713 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:10.713 [2024-11-20 10:05:40.962877] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:10.713 [2024-11-20 10:05:40.964015] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:32:10.713 [2024-11-20 10:05:40.964068] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:10.713 [2024-11-20 10:05:41.065268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:10.713 [2024-11-20 10:05:41.116610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:10.714 [2024-11-20 10:05:41.116661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:10.714 [2024-11-20 10:05:41.116670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:10.714 [2024-11-20 10:05:41.116678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:10.714 [2024-11-20 10:05:41.116685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:10.714 [2024-11-20 10:05:41.118484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.714 [2024-11-20 10:05:41.118610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.714 [2024-11-20 10:05:41.195243] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:10.714 [2024-11-20 10:05:41.195776] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:10.714 [2024-11-20 10:05:41.196088] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
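The target whose startup notices appear above was launched inside that namespace with interrupt mode enabled. A reduced sketch of the invocation (path and flags copied from the trace; the polling loop is my stand-in for the suite's waitforlisten helper, which waits on the same RPC socket):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # wait until the app is up and answering RPCs on /var/tmp/spdk.sock
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

With -m 0x3 the app runs two reactors (cores 0 and 1), and --interrupt-mode makes their threads sleep on file descriptors instead of busy-polling, which is what the "Set spdk_thread (...) to intr mode" notices above confirm.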
00:32:10.974 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:10.975 [2024-11-20 10:05:41.827549] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:10.975 [2024-11-20 10:05:41.860188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:10.975 NULL1 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.975 10:05:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.975 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:11.236 Delay0 00:32:11.236 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.236 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:11.236 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.236 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:11.236 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.236 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1587967 00:32:11.236 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:32:11.236 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:11.236 [2024-11-20 10:05:41.985941] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
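Stripped of the xtrace noise, the configuration the test just applied is a short RPC sequence; rpc_cmd is the suite's wrapper around scripts/rpc.py, so the equivalent plain calls look like this (arguments copied from the trace; rpc.py talks to /var/tmp/spdk.sock by default):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512     # 1000 MB backing bdev, 512-byte blocks
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev inflates every I/O's latency to roughly one second, which guarantees that the spdk_nvme_perf run started above (-t 5, -q 128) still has commands in flight when nvmf_delete_subsystem is issued next, so the test exercises deletion with active I/O.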
00:32:13.152 10:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:32:13.152 10:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:13.152 10:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:32:13.413 Write completed with error (sct=0, sc=8) 
00:32:13.413 Read completed with error (sct=0, sc=8) 
00:32:13.413 starting I/O failed: -6 
[... roughly two hundred further "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" records, byte-for-byte repeats of the three lines above, trimmed here ...] 
00:32:13.414 [2024-11-20 10:05:44.112273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3fc000d490 is same with the state(6) to be set 
00:32:13.414 [2024-11-20 10:05:44.112473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2cef0 is same with the state(6) to be set 
[... the tcp.c:1773 record above repeats about twenty times (10:05:44.112473 through 10:05:44.112670); in the raw capture its text is interleaved mid-sentence with completion records, untangled here ...] 
00:32:13.415 [2024-11-20 10:05:44.113290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3fc000d7c0 is same with the state(6) to be set 
00:32:14.359 [2024-11-20 10:05:45.084687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb419a0 is same with the state(6) to be set 
[... further runs of "Read/Write completed with error (sct=0, sc=8)" records surrounding each of the next three errors trimmed ...] 
00:32:14.359 [2024-11-20 10:05:45.114672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb404a0 is same with the state(6) to be set 
00:32:14.359 [2024-11-20 10:05:45.114907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3fc000d020 is same with the state(6) to be set 
00:32:14.359 [2024-11-20 10:05:45.115148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb40860 is same with the state(6) to be set 
00:32:14.359 Initializing NVMe Controllers 
00:32:14.359 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:32:14.359 Controller IO queue size 128, less than required. 
00:32:14.359 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:32:14.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 
00:32:14.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 
00:32:14.359 Initialization complete. Launching workers. 
00:32:14.359 ======================================================== 
00:32:14.359                                                                            Latency(us) 
00:32:14.359 Device Information                                                       :   IOPS   MiB/s    Average        min        max 
00:32:14.359 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 183.08    0.09  906399.35     439.51 1010160.82 
00:32:14.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 142.29    0.07  935854.07     312.30 1012361.60 
00:32:14.360 ======================================================== 
00:32:14.360 Total                                                                    : 325.37    0.16  919280.16     312.30 1012361.60 
00:32:14.360 
00:32:14.360 [2024-11-20 10:05:45.116278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb419a0 (9): Bad file descriptor 
00:32:14.360 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:32:14.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 
00:32:14.360 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 
00:32:14.360 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1587967 
00:32:14.360 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 
00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 
00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1587967 
00:32:14.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1587967) - No such process 
00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1587967 
00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 
00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1587967 
00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@640 -- # local arg=wait 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1587967 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:14.933 [2024-11-20 10:05:45.648044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1588705 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1588705 00:32:14.933 10:05:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:14.933 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:14.933 [2024-11-20 10:05:45.749313] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:32:15.505 10:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:15.505 10:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1588705 00:32:15.505 10:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:15.766 10:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:15.766 10:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1588705 00:32:15.766 10:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:16.338 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:16.338 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1588705 00:32:16.338 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:16.909 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:16.909 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1588705 00:32:16.909 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:17.480 10:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:17.480 10:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1588705 00:32:17.480 10:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:18.050 10:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:18.050 10:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1588705 00:32:18.050 10:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:18.311 Initializing NVMe Controllers 00:32:18.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:18.311 Controller IO queue size 128, less than required. 00:32:18.311 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:32:18.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 
00:32:18.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 
00:32:18.311 Initialization complete. Launching workers. 
00:32:18.311 ======================================================== 
00:32:18.311                                                                            Latency(us) 
00:32:18.311 Device Information                                                       :   IOPS   MiB/s    Average        min        max 
00:32:18.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00    0.06 1002735.97 1000240.79 1006759.67 
00:32:18.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00    0.06 1005318.82 1000312.44 1041795.29 
00:32:18.311 ======================================================== 
00:32:18.311 Total                                                                    : 256.00    0.12 1004027.40 1000240.79 1041795.29 
00:32:18.311 
00:32:18.311 [2024-11-20 10:05:48.968913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7bd30 is same with the state(6) to be set 
00:32:18.311 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 
00:32:18.311 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1588705 
00:32:18.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1588705) - No such process 
00:32:18.311 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1588705 
00:32:18.311 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 
00:32:18.311 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 
00:32:18.311 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 
00:32:18.311 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 
00:32:18.311 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:32:18.311 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 
00:32:18.311 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 
00:32:18.311 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:32:18.311 rmmod nvme_tcp 
00:32:18.572 rmmod nvme_fabrics 
00:32:18.572 rmmod nvme_keyring 
00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 
00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 
00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1587679 ']' 
00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1587679 
00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1587679 ']' 
00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- common/autotest_common.sh@958 -- # kill -0 1587679 00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1587679 00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1587679' 00:32:18.572 killing process with pid 1587679 00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1587679 00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1587679 00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:18.572 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.573 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:18.573 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:21.120 00:32:21.120 real 0m18.309s 00:32:21.120 user 0m26.921s 00:32:21.120 sys 0m7.157s 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:21.120 ************************************ 00:32:21.120 END TEST nvmf_delete_subsystem 00:32:21.120 ************************************ 00:32:21.120 
10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:21.120 ************************************ 00:32:21.120 START TEST nvmf_host_management 00:32:21.120 ************************************ 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:21.120 * Looking for test storage... 00:32:21.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- 
# (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:21.120 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:21.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.121 --rc genhtml_branch_coverage=1 00:32:21.121 --rc genhtml_function_coverage=1 00:32:21.121 --rc genhtml_legend=1 00:32:21.121 --rc geninfo_all_blocks=1 00:32:21.121 --rc geninfo_unexecuted_blocks=1 00:32:21.121 00:32:21.121 ' 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:21.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.121 --rc genhtml_branch_coverage=1 00:32:21.121 --rc genhtml_function_coverage=1 00:32:21.121 --rc genhtml_legend=1 00:32:21.121 --rc geninfo_all_blocks=1 00:32:21.121 --rc geninfo_unexecuted_blocks=1 00:32:21.121 00:32:21.121 ' 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:21.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.121 --rc genhtml_branch_coverage=1 00:32:21.121 --rc genhtml_function_coverage=1 00:32:21.121 --rc genhtml_legend=1 00:32:21.121 --rc geninfo_all_blocks=1 00:32:21.121 --rc geninfo_unexecuted_blocks=1 00:32:21.121 00:32:21.121 ' 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:21.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.121 --rc genhtml_branch_coverage=1 00:32:21.121 --rc genhtml_function_coverage=1 00:32:21.121 --rc 
genhtml_legend=1 00:32:21.121 --rc geninfo_all_blocks=1 00:32:21.121 --rc geninfo_unexecuted_blocks=1 00:32:21.121 00:32:21.121 ' 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:21.121 10:05:51 
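The scripts/common.sh trace a few entries back (cmp_versions 1.15 '<' 2, reached via lt) decides whether the installed lcov is new enough: both version strings are split on '.', '-' and ':' and compared numerically field by field, with missing fields treated as 0. A condensed sketch of that comparison, not the verbatim helper:

    lt() {                      # lt 1.15 2  ->  succeeds because 1 < 2 in field 0
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # decided at first differing field
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                # equal versions are not less-than
    }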
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:21.121 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:21.122 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:21.122 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:21.122 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.122 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:21.122 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.122 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:21.122 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:21.122 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:32:21.122 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:29.431 10:05:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:29.431 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:29.431 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
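The device probe above works purely from sysfs: it collects PCI functions whose vendor:device IDs appear in the e810/x722/mlx tables (this host matched twice on 0x8086 - 0x159b, an Intel E810 port handled by the ice driver), then lists the kernel net devices registered under each matching function. A stripped-down sketch of that walk, hard-coding the one ID family this host actually has:

    for pci in /sys/bus/pci/devices/*; do
        [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
        echo "Found ${pci##*/} ($(< "$pci/vendor") - $(< "$pci/device"))"
        for net in "$pci"/net/*; do         # one entry per netdev bound to this port
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done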
00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:29.431 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.431 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:29.431 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:29.431 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.431 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:29.431 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:29.431 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:29.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:29.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:32:29.432 00:32:29.432 --- 10.0.0.2 ping statistics --- 00:32:29.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.432 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:29.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:29.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:32:29.432 00:32:29.432 --- 10.0.0.1 ping statistics --- 00:32:29.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.432 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1593390 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1593390 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1593390 ']' 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:29.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:29.432 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:29.432 [2024-11-20 10:05:59.394372] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:29.432 [2024-11-20 10:05:59.395502] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:32:29.432 [2024-11-20 10:05:59.395552] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:29.432 [2024-11-20 10:05:59.495300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:29.432 [2024-11-20 10:05:59.548481] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:29.432 [2024-11-20 10:05:59.548529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:29.432 [2024-11-20 10:05:59.548538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:29.432 [2024-11-20 10:05:59.548545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:29.432 [2024-11-20 10:05:59.548551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:29.432 [2024-11-20 10:05:59.550729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:29.432 [2024-11-20 10:05:59.550894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:29.432 [2024-11-20 10:05:59.551050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.432 [2024-11-20 10:05:59.551050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:29.432 [2024-11-20 10:05:59.629009] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:29.432 [2024-11-20 10:05:59.629626] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:29.432 [2024-11-20 10:05:59.630115] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:29.432 [2024-11-20 10:05:59.630457] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:29.432 [2024-11-20 10:05:59.630518] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
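Phy runs split the two E810 ports across a network namespace so the NVMe/TCP traffic really leaves the host: cvl_0_0 (the target side, 10.0.0.2) moves into cvl_0_0_ns_spdk while the initiator keeps cvl_0_1 (10.0.0.1). Condensed from the nvmf_tcp_init and nvmfappstart traces above, minus the suite's wrappers (the real iptables rule also carries an SPDK_NVMF comment so nvmftestfini can strip it later, as seen in the earlier grep -v SPDK_NVMF cleanup):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the target itself then runs inside the namespace, in interrupt mode:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E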
00:32:29.432 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:29.432 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:29.432 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:29.432 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:29.432 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:29.432 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:29.432 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:29.432 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.432 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:29.432 [2024-11-20 10:06:00.263929] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:29.432 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.432 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:32:29.432 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:29.432 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:29.432 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:29.433 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:32:29.433 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:32:29.433 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.433 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:29.433 Malloc0 00:32:29.694 [2024-11-20 10:06:00.360244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1593785 00:32:29.694 10:06:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1593785 /var/tmp/bdevperf.sock 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1593785 ']' 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:29.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:29.694 { 00:32:29.694 "params": { 00:32:29.694 "name": "Nvme$subsystem", 00:32:29.694 "trtype": "$TEST_TRANSPORT", 00:32:29.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:29.694 "adrfam": "ipv4", 00:32:29.694 "trsvcid": "$NVMF_PORT", 00:32:29.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:29.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:29.694 "hdgst": ${hdgst:-false}, 00:32:29.694 "ddgst": ${ddgst:-false} 00:32:29.694 }, 00:32:29.694 "method": "bdev_nvme_attach_controller" 00:32:29.694 } 00:32:29.694 EOF 00:32:29.694 )") 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
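gen_nvmf_target_json above renders one bdev_nvme_attach_controller entry per subsystem (here just Nvme0) and streams it to bdevperf as --json /dev/fd/63; the filled-in params block is what the jq/printf step prints next. Written to a file instead of a pipe, the config would look roughly like this; the outer subsystems wrapper is the standard SPDK JSON-config shape and is inferred, since the log shows only the params object:

    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }]
      }]
    }

With that saved as, say, /tmp/bdevperf.json (a hypothetical path), the run becomes:

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 10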
00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:29.694 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:29.694 "params": { 00:32:29.694 "name": "Nvme0", 00:32:29.694 "trtype": "tcp", 00:32:29.694 "traddr": "10.0.0.2", 00:32:29.694 "adrfam": "ipv4", 00:32:29.694 "trsvcid": "4420", 00:32:29.694 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:29.694 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:29.694 "hdgst": false, 00:32:29.694 "ddgst": false 00:32:29.694 }, 00:32:29.694 "method": "bdev_nvme_attach_controller" 00:32:29.694 }' 00:32:29.694 [2024-11-20 10:06:00.470958] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:32:29.694 [2024-11-20 10:06:00.471029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593785 ] 00:32:29.694 [2024-11-20 10:06:00.564950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.955 [2024-11-20 10:06:00.618169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.216 Running I/O for 10 seconds... 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.481 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:30.481 [2024-11-20 10:06:01.376287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 
00:32:30.481 [2024-11-20 10:06:01.376494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20712a0 is same with the state(6) to be set 00:32:30.481 [2024-11-20 10:06:01.376904] 
00:32:30.482 [2024-11-20 10:06:01.377049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.482 [2024-11-20 10:06:01.377817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.482 [2024-11-20 10:06:01.377826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.377834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.377844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.377853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.377865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.377872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.377882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.377890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.377899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.377908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.377917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.377925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.377935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.377943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.377953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.377961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.377971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.377978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.377989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.377998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.378008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.378016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.378025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.378032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.378043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.378051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.378061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.378068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.378078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.378088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.378099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.378107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.378117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.378124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.378134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.378141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.378153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.378167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.378176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.378185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.378195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.378204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.378217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.378227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.378236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.378245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.378255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.378262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.378271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.378279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.378288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.378296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.378305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.378312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.378322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.483 [2024-11-20 10:06:01.378332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.378342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb4190 is same with the state(6) to be set
00:32:30.483 [2024-11-20 10:06:01.379661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:30.483 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:30.483 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:32:30.483 task offset: 73728 on job bdev=Nvme0n1 fails
00:32:30.483
00:32:30.483 Latency(us)
00:32:30.483 [2024-11-20T09:06:01.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:30.483 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:30.483 Job: Nvme0n1 ended in about 0.39 seconds with error
00:32:30.483 Verification LBA range: start 0x0 length 0x400
00:32:30.483 Nvme0n1 : 0.39 1478.66 92.42 164.30 0.00 37681.98 5434.03 34515.63
00:32:30.483 [2024-11-20T09:06:01.399Z] ===================================================================================================================
00:32:30.483 [2024-11-20T09:06:01.399Z] Total : 1478.66 92.42 164.30 0.00 37681.98 5434.03 34515.63
00:32:30.483 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:30.483 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:30.483 [2024-11-20 10:06:01.381907] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:32:30.483 [2024-11-20 10:06:01.381950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b000 (9): Bad file descriptor
00:32:30.483 [2024-11-20 10:06:01.383604] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:32:30.483 [2024-11-20 10:06:01.383699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:32:30.483 [2024-11-20 10:06:01.383728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.483 [2024-11-20 10:06:01.383746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:32:30.483 [2024-11-20 10:06:01.383754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:32:30.483 [2024-11-20 10:06:01.383762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:30.483 [2024-11-20 10:06:01.383770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb9b000
00:32:30.483 [2024-11-20 10:06:01.383794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b000 (9): Bad file descriptor
00:32:30.483 [2024-11-20 10:06:01.383808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:32:30.483 [2024-11-20 10:06:01.383817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:32:30.483 [2024-11-20 10:06:01.383827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:32:30.483 [2024-11-20 10:06:01.383838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
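For context, the failing CONNECT above is the expected effect of the subsystem rejecting nqn.2016-06.io.spdk:host0 until host_management.sh@85 allow-lists it via rpc_cmd. A minimal sketch of that allow-listing step done by hand, assuming a running target and rpc.py on its default socket (NQNs are the ones from this log):

    # Allow host0 to connect to cnode0, then list subsystems to confirm the host entry.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems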
00:32:30.747 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.747 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:32:31.690 10:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1593785 00:32:31.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1593785) - No such process 00:32:31.690 10:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:32:31.690 10:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:32:31.690 10:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:32:31.690 10:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:32:31.690 10:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:31.690 10:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:31.690 10:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:31.690 10:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:31.690 { 00:32:31.690 "params": { 00:32:31.690 "name": "Nvme$subsystem", 00:32:31.690 "trtype": "$TEST_TRANSPORT", 00:32:31.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:31.690 "adrfam": "ipv4", 00:32:31.690 "trsvcid": "$NVMF_PORT", 00:32:31.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:31.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:31.690 "hdgst": ${hdgst:-false}, 00:32:31.690 "ddgst": ${ddgst:-false} 00:32:31.690 }, 00:32:31.690 "method": "bdev_nvme_attach_controller" 00:32:31.690 } 00:32:31.690 EOF 00:32:31.690 )") 00:32:31.690 10:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:31.690 10:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:32:31.690 10:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:31.690 10:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:31.690 "params": { 00:32:31.690 "name": "Nvme0", 00:32:31.690 "trtype": "tcp", 00:32:31.690 "traddr": "10.0.0.2", 00:32:31.690 "adrfam": "ipv4", 00:32:31.690 "trsvcid": "4420", 00:32:31.690 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:31.690 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:31.691 "hdgst": false, 00:32:31.691 "ddgst": false 00:32:31.691 }, 00:32:31.691 "method": "bdev_nvme_attach_controller" 00:32:31.691 }' 00:32:31.691 [2024-11-20 10:06:02.455776] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
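The printf output above is the rendered bdev_nvme_attach_controller stanza that bdevperf reads through --json /dev/fd/62. A standalone sketch of the same invocation, assuming the stanza is wrapped in a bdev-subsystem config (the wrapper itself is not shown in this log, and the file name nvme.json is illustrative):

    # Write the config to a regular file instead of a process-substitution fd, then rerun the workload.
    cat > nvme.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
      { "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                    "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false },
        "method": "bdev_nvme_attach_controller" } ] } ] }
    EOF
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json nvme.json -q 64 -o 65536 -w verify -t 1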
00:32:31.691 [2024-11-20 10:06:02.455853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594218 ]
00:32:31.691 [2024-11-20 10:06:02.548482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:31.691 [2024-11-20 10:06:02.600339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:32.262 Running I/O for 1 seconds...
00:32:33.204 2015.00 IOPS, 125.94 MiB/s
00:32:33.204
00:32:33.204 Latency(us)
00:32:33.204 [2024-11-20T09:06:04.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:33.204 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:33.204 Verification LBA range: start 0x0 length 0x400
00:32:33.204 Nvme0n1 : 1.02 2046.81 127.93 0.00 0.00 30446.84 2525.87 36481.71
00:32:33.204 [2024-11-20T09:06:04.120Z] ===================================================================================================================
00:32:33.204 [2024-11-20T09:06:04.120Z] Total : 2046.81 127.93 0.00 0.00 30446.84 2525.87 36481.71
00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:33.204 rmmod nvme_tcp
00:32:33.204 rmmod nvme_fabrics
00:32:33.204 rmmod nvme_keyring
00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1593390 ']'
00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1593390
00:32:33.204 10:06:04
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1593390 ']' 00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1593390 00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:33.204 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1593390 00:32:33.466 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:33.466 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:33.466 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1593390' 00:32:33.466 killing process with pid 1593390 00:32:33.466 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1593390 00:32:33.466 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1593390 00:32:33.466 [2024-11-20 10:06:04.274930] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:33.466 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:33.466 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:33.466 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:33.466 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:32:33.466 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:32:33.466 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:32:33.466 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:33.466 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:33.466 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:33.466 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.466 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:33.466 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:36.015 00:32:36.015 real 0m14.777s 00:32:36.015 user 
0m19.820s 00:32:36.015 sys 0m7.457s 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:36.015 ************************************ 00:32:36.015 END TEST nvmf_host_management 00:32:36.015 ************************************ 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:36.015 ************************************ 00:32:36.015 START TEST nvmf_lvol 00:32:36.015 ************************************ 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:36.015 * Looking for test storage... 00:32:36.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
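The xtrace around this point steps through scripts/common.sh's cmp_versions (here driven by "lt 1.15 2" for the lcov check): versions are split on '.', '-' and ':' and compared numerically field by field; the loop continues below. A condensed standalone rendering of that idea, assuming plain bash — an illustration, not the actual helper:

    # ver_lt A B: return 0 (true) if version A is strictly lower than B.
    # Split each version on '.', '-' and ':' and compare the fields numerically,
    # treating missing fields as 0 — the same shape as the trace below.
    ver_lt() {
      local IFS=.-:
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not strictly less-than
    }
    ver_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2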
00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:36.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.015 --rc genhtml_branch_coverage=1 00:32:36.015 --rc genhtml_function_coverage=1 00:32:36.015 --rc genhtml_legend=1 00:32:36.015 --rc geninfo_all_blocks=1 00:32:36.015 --rc geninfo_unexecuted_blocks=1 00:32:36.015 00:32:36.015 ' 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:36.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.015 --rc genhtml_branch_coverage=1 00:32:36.015 --rc genhtml_function_coverage=1 00:32:36.015 --rc genhtml_legend=1 00:32:36.015 --rc geninfo_all_blocks=1 00:32:36.015 --rc geninfo_unexecuted_blocks=1 00:32:36.015 00:32:36.015 ' 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:36.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.015 --rc genhtml_branch_coverage=1 00:32:36.015 --rc genhtml_function_coverage=1 00:32:36.015 --rc genhtml_legend=1 00:32:36.015 --rc geninfo_all_blocks=1 00:32:36.015 --rc geninfo_unexecuted_blocks=1 00:32:36.015 00:32:36.015 ' 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:36.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.015 --rc genhtml_branch_coverage=1 00:32:36.015 --rc genhtml_function_coverage=1 
00:32:36.015 --rc genhtml_legend=1 00:32:36.015 --rc geninfo_all_blocks=1 00:32:36.015 --rc geninfo_unexecuted_blocks=1 00:32:36.015 00:32:36.015 ' 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:36.015 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:36.016 10:06:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:32:36.016 10:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:44.163 10:06:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:44.163 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:44.163 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:44.163 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:44.163 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:44.163 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:44.164 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:44.164 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:44.164 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:44.164 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:44.164 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:44.164 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:44.164 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:44.164 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:44.164 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:44.164 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:44.164 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:44.164 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:44.164 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:44.164 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:44.164 
10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:44.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:44.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:32:44.164 00:32:44.164 --- 10.0.0.2 ping statistics --- 00:32:44.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.164 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:44.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:44.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:32:44.164 00:32:44.164 --- 10.0.0.1 ping statistics --- 00:32:44.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.164 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1599068 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1599068 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1599068 ']' 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:44.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:44.164 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:44.164 [2024-11-20 10:06:14.338175] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
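For orientation, the nvmftestinit sequence traced above reduces to a short, self-contained network setup. A minimal sketch of the equivalent manual commands, assuming the interface names (cvl_0_0, cvl_0_1) and the 10.0.0.0/24 addressing that this particular run detected and assigned:

# Isolate the target-side port in its own network namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open TCP/4420; the SPDK_NVMF comment tag is what lets nvmftestfini strip exactly this rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Confirm reachability in both directions before the target starts.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1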
00:32:44.164 [2024-11-20 10:06:14.339301] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:32:44.164 [2024-11-20 10:06:14.339354] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:44.164 [2024-11-20 10:06:14.439678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:44.164 [2024-11-20 10:06:14.492723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:44.164 [2024-11-20 10:06:14.492771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:44.164 [2024-11-20 10:06:14.492780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:44.164 [2024-11-20 10:06:14.492787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:44.164 [2024-11-20 10:06:14.492793] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:44.164 [2024-11-20 10:06:14.494553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:44.164 [2024-11-20 10:06:14.494714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.164 [2024-11-20 10:06:14.494715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:44.164 [2024-11-20 10:06:14.572273] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:44.164 [2024-11-20 10:06:14.573323] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:44.164 [2024-11-20 10:06:14.573467] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:44.164 [2024-11-20 10:06:14.573666] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:32:44.426 10:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.426 10:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:32:44.426 10:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:44.426 10:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:44.426 10:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:44.426 10:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:44.426 10:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:44.687 [2024-11-20 10:06:15.363635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:44.687 10:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:44.948 10:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:44.948 10:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:44.948 10:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:44.948 10:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:45.210 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:45.471 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=31b55bea-5a09-4263-b016-209dafe53899 00:32:45.471 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 31b55bea-5a09-4263-b016-209dafe53899 lvol 20 00:32:45.733 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7f43e577-1f20-4d8d-a3f2-e3b5ced7f0a9 00:32:45.733 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:45.733 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7f43e577-1f20-4d8d-a3f2-e3b5ced7f0a9 00:32:45.996 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:46.256 [2024-11-20 10:06:16.991577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:32:46.256 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:46.517 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1599715 00:32:46.517 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:46.517 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:47.462 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7f43e577-1f20-4d8d-a3f2-e3b5ced7f0a9 MY_SNAPSHOT 00:32:47.724 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c2b31cf7-d304-4da3-b4fc-b471ce3f0b02 00:32:47.724 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7f43e577-1f20-4d8d-a3f2-e3b5ced7f0a9 30 00:32:47.985 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c2b31cf7-d304-4da3-b4fc-b471ce3f0b02 MY_CLONE 00:32:48.245 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5b63bbc7-f8f2-43c8-8da7-0dc20675cb6a 00:32:48.245 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 5b63bbc7-f8f2-43c8-8da7-0dc20675cb6a 00:32:48.511 10:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1599715 00:32:58.510 Initializing NVMe Controllers 00:32:58.510 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:58.510 Controller IO queue size 128, less than required. 00:32:58.510 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:58.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:58.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:58.510 Initialization complete. Launching workers. 
00:32:58.510 ========================================================
00:32:58.510                                                                                Latency(us)
00:32:58.510 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:32:58.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   15477.80      60.46    8273.88    1908.38   90811.18
00:32:58.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   15344.80      59.94    8340.98    2134.07   74210.90
00:32:58.510 ========================================================
00:32:58.510 Total                                                                    :   30822.60     120.40    8307.28    1908.38   90811.18
00:32:58.510
00:32:58.510 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:58.510 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7f43e577-1f20-4d8d-a3f2-e3b5ced7f0a9 00:32:58.510 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 31b55bea-5a09-4263-b016-209dafe53899 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:58.510 rmmod nvme_tcp 00:32:58.510 rmmod nvme_fabrics 00:32:58.510 rmmod nvme_keyring 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1599068 ']' 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1599068 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1599068 ']' 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1599068 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1599068 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1599068' 00:32:58.510 killing process with pid 1599068 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1599068 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1599068 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:58.510 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:59.897 00:32:59.897 real 0m24.038s 00:32:59.897 user 0m56.104s 00:32:59.897 sys 0m10.949s 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:59.897 ************************************ 00:32:59.897 END TEST nvmf_lvol 00:32:59.897 ************************************ 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:59.897 ************************************ 00:32:59.897 START TEST nvmf_lvs_grow 00:32:59.897 
************************************ 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:59.897 * Looking for test storage... 00:32:59.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:59.897 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:59.898 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:59.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.898 --rc genhtml_branch_coverage=1 00:32:59.898 --rc genhtml_function_coverage=1 00:32:59.898 --rc genhtml_legend=1 00:32:59.898 --rc geninfo_all_blocks=1 00:32:59.898 --rc geninfo_unexecuted_blocks=1 00:32:59.898 00:32:59.898 ' 00:32:59.898 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:59.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.898 --rc genhtml_branch_coverage=1 00:32:59.898 --rc genhtml_function_coverage=1 00:32:59.898 --rc genhtml_legend=1 00:32:59.898 --rc geninfo_all_blocks=1 00:32:59.898 --rc geninfo_unexecuted_blocks=1 00:32:59.898 00:32:59.898 ' 00:32:59.898 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:59.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.898 --rc genhtml_branch_coverage=1 00:32:59.898 --rc genhtml_function_coverage=1 00:32:59.898 --rc genhtml_legend=1 00:32:59.898 --rc geninfo_all_blocks=1 00:32:59.898 --rc geninfo_unexecuted_blocks=1 00:32:59.898 00:32:59.898 ' 00:32:59.898 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:59.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.898 --rc genhtml_branch_coverage=1 00:32:59.898 --rc genhtml_function_coverage=1 00:32:59.898 --rc genhtml_legend=1 00:32:59.898 --rc geninfo_all_blocks=1 00:32:59.898 --rc geninfo_unexecuted_blocks=1 00:32:59.898 00:32:59.898 ' 00:32:59.898 10:06:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:59.898 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:59.898 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.898 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.898 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.898 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.898 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.898 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.898 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.898 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.898 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.898 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
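The build_nvmf_app_args appends here (continued just below with --interrupt-mode) are what eventually become the target command line, once nvmf_tcp_init prefixes the namespace wrapper. Roughly, as a sketch of the assembly for this run (SHM id 0; the core mask comes from each test's nvmfappstart call):

NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shared-memory id 0, all tracepoint groups enabled
NVMF_APP+=(--interrupt-mode)                   # because the suite runs with --interrupt-mode
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
# Net effect, visible at nvmfappstart below:
#   ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1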
00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:33:00.160 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:08.306 10:06:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
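Each device that survived the ID filter (0x8086:0x159b is an Intel E810 port bound to the ice driver) is now resolved to its kernel interface through sysfs. Compressed, the loop traced below amounts to this sketch (it skips the link-state check the script also performs):

for pci in "${pci_devs[@]}"; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:4b:00.0/net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
  net_devs+=("${pci_net_devs[@]}")
done
# leaving net_devs=(cvl_0_0 cvl_0_1): one port for the target namespace, one for the initiator.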
00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:08.306 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:08.306 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:08.307 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:08.307 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:08.307 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:08.307 10:06:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:08.307 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:08.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:08.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:33:08.307 00:33:08.307 --- 10.0.0.2 ping statistics --- 00:33:08.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.307 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:08.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:08.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:33:08.307 00:33:08.307 --- 10.0.0.1 ping statistics --- 00:33:08.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.307 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:08.307 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:08.308 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1606027 00:33:08.308 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1606027 00:33:08.308 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:08.308 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1606027 ']' 00:33:08.308 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:08.308 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:08.308 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:08.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:08.308 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:08.308 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:08.308 [2024-11-20 10:06:38.380081] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
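The trace above is the TCP test-bed init: of the net devices discovered under the two PCI functions, cvl_0_0 becomes the target interface and moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2, while cvl_0_1 stays in the host as the initiator side with 10.0.0.1; an iptables rule opens TCP port 4420, a ping in each direction proves the path, and nvme-tcp is loaded before nvmf_tgt starts inside the namespace. A minimal bash sketch of the same plumbing (interface names, addresses, and port are the ones printed above; the rest is an illustrative reconstruction, not the nvmf/common.sh source):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"               # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator port stays in the host
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                            # host reaches the namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1        # namespace reaches the host
    modprobe nvme-tcp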
00:33:08.308 [2024-11-20 10:06:38.381223] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:33:08.308 [2024-11-20 10:06:38.381276] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:08.308 [2024-11-20 10:06:38.481255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.308 [2024-11-20 10:06:38.531791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:08.308 [2024-11-20 10:06:38.531844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:08.308 [2024-11-20 10:06:38.531853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:08.308 [2024-11-20 10:06:38.531860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:08.308 [2024-11-20 10:06:38.531867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:08.308 [2024-11-20 10:06:38.532664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.308 [2024-11-20 10:06:38.608751] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:08.308 [2024-11-20 10:06:38.609042] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:08.308 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:08.308 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:33:08.308 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:08.308 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:08.308 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:08.568 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:08.568 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:08.568 [2024-11-20 10:06:39.413601] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:08.568 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:33:08.568 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:08.568 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:08.568 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:08.828 ************************************ 00:33:08.828 START TEST lvs_grow_clean 00:33:08.828 ************************************ 00:33:08.828 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:33:08.828 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:08.828 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:08.828 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:08.828 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:08.828 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:08.828 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:08.828 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:08.828 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:08.828 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:08.828 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:08.828 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:09.089 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f876e4a3-6d28-4375-aaec-98ca0a6485f9 00:33:09.089 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f876e4a3-6d28-4375-aaec-98ca0a6485f9 00:33:09.089 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:09.349 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:09.349 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:09.349 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f876e4a3-6d28-4375-aaec-98ca0a6485f9 lvol 150 00:33:09.609 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=382f2eca-63b1-4376-8907-3001286d06dc 00:33:09.609 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:09.609 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:09.609 [2024-11-20 10:06:40.469265] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:09.609 [2024-11-20 10:06:40.469429] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:09.610 true 00:33:09.610 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f876e4a3-6d28-4375-aaec-98ca0a6485f9 00:33:09.610 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:09.923 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:09.923 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:10.183 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 382f2eca-63b1-4376-8907-3001286d06dc 00:33:10.183 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:10.443 [2024-11-20 10:06:41.217924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.443 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:10.705 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1606492 00:33:10.705 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:10.705 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:10.705 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1606492 /var/tmp/bdevperf.sock 00:33:10.705 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1606492 ']' 00:33:10.705 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:33:10.705 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.705 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:10.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:10.705 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.705 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:10.705 [2024-11-20 10:06:41.472935] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:33:10.705 [2024-11-20 10:06:41.473008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606492 ] 00:33:10.705 [2024-11-20 10:06:41.547935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.705 [2024-11-20 10:06:41.601364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:11.648 10:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:11.648 10:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:33:11.648 10:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:11.648 Nvme0n1 00:33:11.648 10:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:11.909 [ 00:33:11.909 { 00:33:11.909 "name": "Nvme0n1", 00:33:11.909 "aliases": [ 00:33:11.909 "382f2eca-63b1-4376-8907-3001286d06dc" 00:33:11.909 ], 00:33:11.909 "product_name": "NVMe disk", 00:33:11.909 "block_size": 4096, 00:33:11.909 "num_blocks": 38912, 00:33:11.909 "uuid": "382f2eca-63b1-4376-8907-3001286d06dc", 00:33:11.909 "numa_id": 0, 00:33:11.909 "assigned_rate_limits": { 00:33:11.909 "rw_ios_per_sec": 0, 00:33:11.909 "rw_mbytes_per_sec": 0, 00:33:11.909 "r_mbytes_per_sec": 0, 00:33:11.909 "w_mbytes_per_sec": 0 00:33:11.909 }, 00:33:11.909 "claimed": false, 00:33:11.909 "zoned": false, 00:33:11.909 "supported_io_types": { 00:33:11.909 "read": true, 00:33:11.909 "write": true, 00:33:11.909 "unmap": true, 00:33:11.909 "flush": true, 00:33:11.909 "reset": true, 00:33:11.909 "nvme_admin": true, 00:33:11.909 "nvme_io": true, 00:33:11.909 "nvme_io_md": false, 00:33:11.909 "write_zeroes": true, 00:33:11.909 "zcopy": false, 00:33:11.909 "get_zone_info": false, 00:33:11.909 "zone_management": false, 00:33:11.909 "zone_append": false, 00:33:11.909 "compare": true, 00:33:11.909 "compare_and_write": true, 00:33:11.909 "abort": true, 00:33:11.909 "seek_hole": false, 00:33:11.909 "seek_data": false, 00:33:11.909 "copy": true, 
00:33:11.909 "nvme_iov_md": false 00:33:11.909 }, 00:33:11.910 "memory_domains": [ 00:33:11.910 { 00:33:11.910 "dma_device_id": "system", 00:33:11.910 "dma_device_type": 1 00:33:11.910 } 00:33:11.910 ], 00:33:11.910 "driver_specific": { 00:33:11.910 "nvme": [ 00:33:11.910 { 00:33:11.910 "trid": { 00:33:11.910 "trtype": "TCP", 00:33:11.910 "adrfam": "IPv4", 00:33:11.910 "traddr": "10.0.0.2", 00:33:11.910 "trsvcid": "4420", 00:33:11.910 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:11.910 }, 00:33:11.910 "ctrlr_data": { 00:33:11.910 "cntlid": 1, 00:33:11.910 "vendor_id": "0x8086", 00:33:11.910 "model_number": "SPDK bdev Controller", 00:33:11.910 "serial_number": "SPDK0", 00:33:11.910 "firmware_revision": "25.01", 00:33:11.910 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:11.910 "oacs": { 00:33:11.910 "security": 0, 00:33:11.910 "format": 0, 00:33:11.910 "firmware": 0, 00:33:11.910 "ns_manage": 0 00:33:11.910 }, 00:33:11.910 "multi_ctrlr": true, 00:33:11.910 "ana_reporting": false 00:33:11.910 }, 00:33:11.910 "vs": { 00:33:11.910 "nvme_version": "1.3" 00:33:11.910 }, 00:33:11.910 "ns_data": { 00:33:11.910 "id": 1, 00:33:11.910 "can_share": true 00:33:11.910 } 00:33:11.910 } 00:33:11.910 ], 00:33:11.910 "mp_policy": "active_passive" 00:33:11.910 } 00:33:11.910 } 00:33:11.910 ] 00:33:11.910 10:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:11.910 10:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1606775 00:33:11.910 10:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:11.910 Running I/O for 10 seconds... 
00:33:13.298 Latency(us) 00:33:13.298 [2024-11-20T09:06:44.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:13.298 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:13.298 Nvme0n1 : 1.00 16891.00 65.98 0.00 0.00 0.00 0.00 0.00 00:33:13.298 [2024-11-20T09:06:44.214Z] =================================================================================================================== 00:33:13.298 [2024-11-20T09:06:44.214Z] Total : 16891.00 65.98 0.00 0.00 0.00 0.00 0.00 00:33:13.298 00:33:13.871 10:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f876e4a3-6d28-4375-aaec-98ca0a6485f9 00:33:14.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:14.132 Nvme0n1 : 2.00 17145.00 66.97 0.00 0.00 0.00 0.00 0.00 00:33:14.132 [2024-11-20T09:06:45.048Z] =================================================================================================================== 00:33:14.132 [2024-11-20T09:06:45.048Z] Total : 17145.00 66.97 0.00 0.00 0.00 0.00 0.00 00:33:14.132 00:33:14.132 true 00:33:14.132 10:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f876e4a3-6d28-4375-aaec-98ca0a6485f9 00:33:14.132 10:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:14.392 10:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:14.393 10:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:14.393 10:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1606775 00:33:14.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:14.965 Nvme0n1 : 3.00 17399.00 67.96 0.00 0.00 0.00 0.00 0.00 00:33:14.965 [2024-11-20T09:06:45.881Z] =================================================================================================================== 00:33:14.965 [2024-11-20T09:06:45.881Z] Total : 17399.00 67.96 0.00 0.00 0.00 0.00 0.00 00:33:14.965 00:33:15.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:15.907 Nvme0n1 : 4.00 17605.50 68.77 0.00 0.00 0.00 0.00 0.00 00:33:15.907 [2024-11-20T09:06:46.823Z] =================================================================================================================== 00:33:15.907 [2024-11-20T09:06:46.823Z] Total : 17605.50 68.77 0.00 0.00 0.00 0.00 0.00 00:33:15.907 00:33:17.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:17.291 Nvme0n1 : 5.00 19186.80 74.95 0.00 0.00 0.00 0.00 0.00 00:33:17.291 [2024-11-20T09:06:48.207Z] =================================================================================================================== 00:33:17.291 [2024-11-20T09:06:48.207Z] Total : 19186.80 74.95 0.00 0.00 0.00 0.00 0.00 00:33:17.291 00:33:18.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:18.235 Nvme0n1 : 6.00 20243.50 79.08 0.00 0.00 0.00 0.00 0.00 00:33:18.235 [2024-11-20T09:06:49.151Z] 
=================================================================================================================== 00:33:18.235 [2024-11-20T09:06:49.151Z] Total : 20243.50 79.08 0.00 0.00 0.00 0.00 0.00 00:33:18.235 00:33:19.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:19.175 Nvme0n1 : 7.00 21007.43 82.06 0.00 0.00 0.00 0.00 0.00 00:33:19.175 [2024-11-20T09:06:50.091Z] =================================================================================================================== 00:33:19.175 [2024-11-20T09:06:50.091Z] Total : 21007.43 82.06 0.00 0.00 0.00 0.00 0.00 00:33:19.175 00:33:20.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:20.118 Nvme0n1 : 8.00 21570.50 84.26 0.00 0.00 0.00 0.00 0.00 00:33:20.118 [2024-11-20T09:06:51.034Z] =================================================================================================================== 00:33:20.118 [2024-11-20T09:06:51.034Z] Total : 21570.50 84.26 0.00 0.00 0.00 0.00 0.00 00:33:20.118 00:33:21.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:21.062 Nvme0n1 : 9.00 22017.22 86.00 0.00 0.00 0.00 0.00 0.00 00:33:21.062 [2024-11-20T09:06:51.978Z] =================================================================================================================== 00:33:21.062 [2024-11-20T09:06:51.978Z] Total : 22017.22 86.00 0.00 0.00 0.00 0.00 0.00 00:33:21.062 00:33:22.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:22.005 Nvme0n1 : 10.00 22379.50 87.42 0.00 0.00 0.00 0.00 0.00 00:33:22.005 [2024-11-20T09:06:52.921Z] =================================================================================================================== 00:33:22.005 [2024-11-20T09:06:52.921Z] Total : 22379.50 87.42 0.00 0.00 0.00 0.00 0.00 00:33:22.005 00:33:22.005 00:33:22.005 Latency(us) 00:33:22.005 [2024-11-20T09:06:52.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:22.005 Nvme0n1 : 10.00 22377.85 87.41 0.00 0.00 5716.85 2853.55 31894.19 00:33:22.005 [2024-11-20T09:06:52.921Z] =================================================================================================================== 00:33:22.005 [2024-11-20T09:06:52.921Z] Total : 22377.85 87.41 0.00 0.00 5716.85 2853.55 31894.19 00:33:22.005 { 00:33:22.005 "results": [ 00:33:22.005 { 00:33:22.005 "job": "Nvme0n1", 00:33:22.005 "core_mask": "0x2", 00:33:22.005 "workload": "randwrite", 00:33:22.005 "status": "finished", 00:33:22.005 "queue_depth": 128, 00:33:22.005 "io_size": 4096, 00:33:22.005 "runtime": 10.003642, 00:33:22.005 "iops": 22377.849987034722, 00:33:22.005 "mibps": 87.41347651185438, 00:33:22.005 "io_failed": 0, 00:33:22.005 "io_timeout": 0, 00:33:22.005 "avg_latency_us": 5716.852240269216, 00:33:22.005 "min_latency_us": 2853.5466666666666, 00:33:22.005 "max_latency_us": 31894.18666666667 00:33:22.005 } 00:33:22.005 ], 00:33:22.005 "core_count": 1 00:33:22.005 } 00:33:22.005 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1606492 00:33:22.005 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1606492 ']' 00:33:22.005 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1606492 
00:33:22.005 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:33:22.005 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:22.005 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1606492 00:33:22.266 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:22.266 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:22.266 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1606492' 00:33:22.266 killing process with pid 1606492 00:33:22.266 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1606492 00:33:22.266 Received shutdown signal, test time was about 10.000000 seconds 00:33:22.266 00:33:22.266 Latency(us) 00:33:22.266 [2024-11-20T09:06:53.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.266 [2024-11-20T09:06:53.182Z] =================================================================================================================== 00:33:22.266 [2024-11-20T09:06:53.182Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:22.266 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1606492 00:33:22.266 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:22.527 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:22.527 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f876e4a3-6d28-4375-aaec-98ca0a6485f9 00:33:22.527 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:22.788 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:22.788 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:33:22.788 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:22.788 [2024-11-20 10:06:53.669286] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:23.049 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f876e4a3-6d28-4375-aaec-98ca0a6485f9 
00:33:23.049 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:33:23.049 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f876e4a3-6d28-4375-aaec-98ca0a6485f9 00:33:23.049 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:23.049 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:23.049 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:23.049 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:23.049 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:23.049 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:23.049 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:23.049 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:23.049 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f876e4a3-6d28-4375-aaec-98ca0a6485f9 00:33:23.049 request: 00:33:23.049 { 00:33:23.049 "uuid": "f876e4a3-6d28-4375-aaec-98ca0a6485f9", 00:33:23.049 "method": "bdev_lvol_get_lvstores", 00:33:23.049 "req_id": 1 00:33:23.049 } 00:33:23.049 Got JSON-RPC error response 00:33:23.049 response: 00:33:23.049 { 00:33:23.049 "code": -19, 00:33:23.049 "message": "No such device" 00:33:23.049 } 00:33:23.049 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:33:23.049 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:23.049 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:23.049 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:23.049 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:23.310 aio_bdev 00:33:23.310 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
382f2eca-63b1-4376-8907-3001286d06dc 00:33:23.310 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=382f2eca-63b1-4376-8907-3001286d06dc 00:33:23.310 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:23.310 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:33:23.310 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:23.310 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:23.310 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:23.572 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 382f2eca-63b1-4376-8907-3001286d06dc -t 2000 00:33:23.572 [ 00:33:23.572 { 00:33:23.572 "name": "382f2eca-63b1-4376-8907-3001286d06dc", 00:33:23.572 "aliases": [ 00:33:23.572 "lvs/lvol" 00:33:23.572 ], 00:33:23.572 "product_name": "Logical Volume", 00:33:23.572 "block_size": 4096, 00:33:23.572 "num_blocks": 38912, 00:33:23.572 "uuid": "382f2eca-63b1-4376-8907-3001286d06dc", 00:33:23.572 "assigned_rate_limits": { 00:33:23.572 "rw_ios_per_sec": 0, 00:33:23.572 "rw_mbytes_per_sec": 0, 00:33:23.572 "r_mbytes_per_sec": 0, 00:33:23.572 "w_mbytes_per_sec": 0 00:33:23.572 }, 00:33:23.572 "claimed": false, 00:33:23.572 "zoned": false, 00:33:23.572 "supported_io_types": { 00:33:23.572 "read": true, 00:33:23.572 "write": true, 00:33:23.572 "unmap": true, 00:33:23.572 "flush": false, 00:33:23.572 "reset": true, 00:33:23.572 "nvme_admin": false, 00:33:23.572 "nvme_io": false, 00:33:23.572 "nvme_io_md": false, 00:33:23.572 "write_zeroes": true, 00:33:23.572 "zcopy": false, 00:33:23.572 "get_zone_info": false, 00:33:23.572 "zone_management": false, 00:33:23.572 "zone_append": false, 00:33:23.572 "compare": false, 00:33:23.572 "compare_and_write": false, 00:33:23.572 "abort": false, 00:33:23.572 "seek_hole": true, 00:33:23.572 "seek_data": true, 00:33:23.572 "copy": false, 00:33:23.572 "nvme_iov_md": false 00:33:23.572 }, 00:33:23.572 "driver_specific": { 00:33:23.572 "lvol": { 00:33:23.572 "lvol_store_uuid": "f876e4a3-6d28-4375-aaec-98ca0a6485f9", 00:33:23.572 "base_bdev": "aio_bdev", 00:33:23.572 "thin_provision": false, 00:33:23.572 "num_allocated_clusters": 38, 00:33:23.572 "snapshot": false, 00:33:23.572 "clone": false, 00:33:23.572 "esnap_clone": false 00:33:23.572 } 00:33:23.572 } 00:33:23.572 } 00:33:23.572 ] 00:33:23.572 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:33:23.572 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f876e4a3-6d28-4375-aaec-98ca0a6485f9 00:33:23.572 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:23.833 10:06:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:23.833 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f876e4a3-6d28-4375-aaec-98ca0a6485f9 00:33:23.833 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:24.093 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:24.093 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 382f2eca-63b1-4376-8907-3001286d06dc 00:33:24.093 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f876e4a3-6d28-4375-aaec-98ca0a6485f9 00:33:24.354 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:24.615 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:24.615 00:33:24.615 real 0m15.852s 00:33:24.615 user 0m15.506s 00:33:24.615 sys 0m1.460s 00:33:24.615 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:24.615 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:24.615 ************************************ 00:33:24.615 END TEST lvs_grow_clean 00:33:24.615 ************************************ 00:33:24.615 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:33:24.615 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:24.615 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:24.615 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:24.615 ************************************ 00:33:24.615 START TEST lvs_grow_dirty 00:33:24.615 ************************************ 00:33:24.615 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:33:24.615 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:24.615 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:24.615 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:24.615 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:24.615 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:24.615 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:24.615 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:24.615 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:24.615 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:24.876 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:24.876 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:25.137 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d89a1c9a-4c9a-481f-b66b-cba097ec73d1 00:33:25.137 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d89a1c9a-4c9a-481f-b66b-cba097ec73d1 00:33:25.137 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:25.137 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:25.137 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:25.137 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d89a1c9a-4c9a-481f-b66b-cba097ec73d1 lvol 150 00:33:25.398 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e41cf6f1-f5fb-4d0c-b3d6-b73445a3d984 00:33:25.398 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:25.398 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:25.658 [2024-11-20 10:06:56.329270] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:25.658 [2024-11-20 10:06:56.329441] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:25.658 true 00:33:25.658 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d89a1c9a-4c9a-481f-b66b-cba097ec73d1 00:33:25.658 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:25.658 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:25.658 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:25.919 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e41cf6f1-f5fb-4d0c-b3d6-b73445a3d984 00:33:26.179 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:26.179 [2024-11-20 10:06:57.021724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:26.180 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:26.468 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:26.468 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1609510 00:33:26.468 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:26.468 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1609510 /var/tmp/bdevperf.sock 00:33:26.468 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1609510 ']' 00:33:26.468 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:26.468 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:26.468 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:26.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
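The dirty variant now repeats the same construction on a fresh lvstore (d89a1c9a-4c9a-481f-b66b-cba097ec73d1) and lvol (e41cf6f1-f5fb-4d0c-b3d6-b73445a3d984), exports it over the same listener, and starts a second bdevperf (pid 1609510), waiting for its RPC socket before attaching. The body of waitforlisten is not part of this trace; a generic stand-in for the wait it performs might look like:

    # Illustrative stand-in for waitforlisten, not the autotest_common.sh source:
    pid=1609510 sock=/var/tmp/bdevperf.sock
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || break       # process died: stop waiting
        [ -S "$sock" ] && break                   # RPC socket is up: proceed
        sleep 0.1
    done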
00:33:26.468 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.468 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:26.468 [2024-11-20 10:06:57.236400] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:33:26.468 [2024-11-20 10:06:57.236452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609510 ] 00:33:26.468 [2024-11-20 10:06:57.294945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.468 [2024-11-20 10:06:57.324640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.792 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:26.792 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:26.792 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:26.792 Nvme0n1 00:33:26.792 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:27.084 [ 00:33:27.084 { 00:33:27.084 "name": "Nvme0n1", 00:33:27.084 "aliases": [ 00:33:27.084 "e41cf6f1-f5fb-4d0c-b3d6-b73445a3d984" 00:33:27.084 ], 00:33:27.084 "product_name": "NVMe disk", 00:33:27.084 "block_size": 4096, 00:33:27.084 "num_blocks": 38912, 00:33:27.084 "uuid": "e41cf6f1-f5fb-4d0c-b3d6-b73445a3d984", 00:33:27.084 "numa_id": 0, 00:33:27.084 "assigned_rate_limits": { 00:33:27.084 "rw_ios_per_sec": 0, 00:33:27.084 "rw_mbytes_per_sec": 0, 00:33:27.084 "r_mbytes_per_sec": 0, 00:33:27.084 "w_mbytes_per_sec": 0 00:33:27.084 }, 00:33:27.084 "claimed": false, 00:33:27.084 "zoned": false, 00:33:27.084 "supported_io_types": { 00:33:27.084 "read": true, 00:33:27.084 "write": true, 00:33:27.084 "unmap": true, 00:33:27.084 "flush": true, 00:33:27.084 "reset": true, 00:33:27.084 "nvme_admin": true, 00:33:27.084 "nvme_io": true, 00:33:27.084 "nvme_io_md": false, 00:33:27.084 "write_zeroes": true, 00:33:27.084 "zcopy": false, 00:33:27.084 "get_zone_info": false, 00:33:27.084 "zone_management": false, 00:33:27.084 "zone_append": false, 00:33:27.084 "compare": true, 00:33:27.084 "compare_and_write": true, 00:33:27.084 "abort": true, 00:33:27.084 "seek_hole": false, 00:33:27.084 "seek_data": false, 00:33:27.084 "copy": true, 00:33:27.084 "nvme_iov_md": false 00:33:27.084 }, 00:33:27.084 "memory_domains": [ 00:33:27.084 { 00:33:27.084 "dma_device_id": "system", 00:33:27.084 "dma_device_type": 1 00:33:27.084 } 00:33:27.084 ], 00:33:27.084 "driver_specific": { 00:33:27.084 "nvme": [ 00:33:27.084 { 00:33:27.084 "trid": { 00:33:27.084 "trtype": "TCP", 00:33:27.084 "adrfam": "IPv4", 00:33:27.084 "traddr": "10.0.0.2", 00:33:27.084 "trsvcid": "4420", 00:33:27.084 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:27.084 }, 00:33:27.084 "ctrlr_data": 
{ 00:33:27.084 "cntlid": 1, 00:33:27.084 "vendor_id": "0x8086", 00:33:27.084 "model_number": "SPDK bdev Controller", 00:33:27.084 "serial_number": "SPDK0", 00:33:27.084 "firmware_revision": "25.01", 00:33:27.084 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:27.084 "oacs": { 00:33:27.084 "security": 0, 00:33:27.084 "format": 0, 00:33:27.084 "firmware": 0, 00:33:27.084 "ns_manage": 0 00:33:27.084 }, 00:33:27.084 "multi_ctrlr": true, 00:33:27.084 "ana_reporting": false 00:33:27.084 }, 00:33:27.084 "vs": { 00:33:27.084 "nvme_version": "1.3" 00:33:27.084 }, 00:33:27.084 "ns_data": { 00:33:27.084 "id": 1, 00:33:27.084 "can_share": true 00:33:27.084 } 00:33:27.084 } 00:33:27.084 ], 00:33:27.084 "mp_policy": "active_passive" 00:33:27.084 } 00:33:27.084 } 00:33:27.085 ] 00:33:27.085 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1609703 00:33:27.085 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:27.085 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:27.085 Running I/O for 10 seconds... 00:33:28.027 Latency(us) 00:33:28.027 [2024-11-20T09:06:58.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:28.027 Nvme0n1 : 1.00 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:33:28.027 [2024-11-20T09:06:58.943Z] =================================================================================================================== 00:33:28.027 [2024-11-20T09:06:58.943Z] Total : 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:33:28.027 00:33:28.969 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d89a1c9a-4c9a-481f-b66b-cba097ec73d1 00:33:29.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:29.230 Nvme0n1 : 2.00 17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:33:29.230 [2024-11-20T09:07:00.146Z] =================================================================================================================== 00:33:29.230 [2024-11-20T09:07:00.146Z] Total : 17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:33:29.230 00:33:29.230 true 00:33:29.230 10:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d89a1c9a-4c9a-481f-b66b-cba097ec73d1 00:33:29.230 10:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:29.490 10:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:29.490 10:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:29.490 10:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1609703 00:33:30.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:30.061 Nvme0n1 : 
3.00 17907.00 69.95 0.00 0.00 0.00 0.00 0.00 00:33:30.061 [2024-11-20T09:07:00.977Z] =================================================================================================================== 00:33:30.061 [2024-11-20T09:07:00.977Z] Total : 17907.00 69.95 0.00 0.00 0.00 0.00 0.00 00:33:30.061 00:33:31.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:31.446 Nvme0n1 : 4.00 17967.50 70.19 0.00 0.00 0.00 0.00 0.00 00:33:31.446 [2024-11-20T09:07:02.362Z] =================================================================================================================== 00:33:31.446 [2024-11-20T09:07:02.362Z] Total : 17967.50 70.19 0.00 0.00 0.00 0.00 0.00 00:33:31.446 00:33:32.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:32.421 Nvme0n1 : 5.00 18783.40 73.37 0.00 0.00 0.00 0.00 0.00 00:33:32.421 [2024-11-20T09:07:03.337Z] =================================================================================================================== 00:33:32.421 [2024-11-20T09:07:03.337Z] Total : 18783.40 73.37 0.00 0.00 0.00 0.00 0.00 00:33:32.421 00:33:33.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:33.360 Nvme0n1 : 6.00 19926.00 77.84 0.00 0.00 0.00 0.00 0.00 00:33:33.360 [2024-11-20T09:07:04.276Z] =================================================================================================================== 00:33:33.360 [2024-11-20T09:07:04.276Z] Total : 19926.00 77.84 0.00 0.00 0.00 0.00 0.00 00:33:33.360 00:33:34.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:34.302 Nvme0n1 : 7.00 20726.14 80.96 0.00 0.00 0.00 0.00 0.00 00:33:34.302 [2024-11-20T09:07:05.218Z] =================================================================================================================== 00:33:34.302 [2024-11-20T09:07:05.218Z] Total : 20726.14 80.96 0.00 0.00 0.00 0.00 0.00 00:33:34.302 00:33:35.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:35.245 Nvme0n1 : 8.00 21342.12 83.37 0.00 0.00 0.00 0.00 0.00 00:33:35.245 [2024-11-20T09:07:06.161Z] =================================================================================================================== 00:33:35.245 [2024-11-20T09:07:06.161Z] Total : 21342.12 83.37 0.00 0.00 0.00 0.00 0.00 00:33:35.245 00:33:36.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:36.187 Nvme0n1 : 9.00 21821.22 85.24 0.00 0.00 0.00 0.00 0.00 00:33:36.187 [2024-11-20T09:07:07.103Z] =================================================================================================================== 00:33:36.187 [2024-11-20T09:07:07.103Z] Total : 21821.22 85.24 0.00 0.00 0.00 0.00 0.00 00:33:36.187 00:33:37.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:37.128 Nvme0n1 : 10.00 22204.50 86.74 0.00 0.00 0.00 0.00 0.00 00:33:37.128 [2024-11-20T09:07:08.044Z] =================================================================================================================== 00:33:37.128 [2024-11-20T09:07:08.044Z] Total : 22204.50 86.74 0.00 0.00 0.00 0.00 0.00 00:33:37.128 00:33:37.128 00:33:37.128 Latency(us) 00:33:37.128 [2024-11-20T09:07:08.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:37.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:37.128 Nvme0n1 : 10.01 22205.35 86.74 0.00 0.00 5761.88 2880.85 30801.92 00:33:37.128 
[2024-11-20T09:07:08.044Z] =================================================================================================================== 00:33:37.128 [2024-11-20T09:07:08.044Z] Total : 22205.35 86.74 0.00 0.00 5761.88 2880.85 30801.92 00:33:37.128 { 00:33:37.128 "results": [ 00:33:37.128 { 00:33:37.128 "job": "Nvme0n1", 00:33:37.128 "core_mask": "0x2", 00:33:37.128 "workload": "randwrite", 00:33:37.128 "status": "finished", 00:33:37.128 "queue_depth": 128, 00:33:37.128 "io_size": 4096, 00:33:37.128 "runtime": 10.00538, 00:33:37.128 "iops": 22205.353519806344, 00:33:37.128 "mibps": 86.73966218674353, 00:33:37.128 "io_failed": 0, 00:33:37.128 "io_timeout": 0, 00:33:37.128 "avg_latency_us": 5761.878490965749, 00:33:37.128 "min_latency_us": 2880.8533333333335, 00:33:37.129 "max_latency_us": 30801.92 00:33:37.129 } 00:33:37.129 ], 00:33:37.129 "core_count": 1 00:33:37.129 } 00:33:37.129 10:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1609510 00:33:37.129 10:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1609510 ']' 00:33:37.129 10:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1609510 00:33:37.129 10:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:33:37.129 10:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:37.129 10:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1609510 00:33:37.389 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:37.389 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:37.389 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1609510' 00:33:37.389 killing process with pid 1609510 00:33:37.389 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1609510 00:33:37.389 Received shutdown signal, test time was about 10.000000 seconds 00:33:37.389 00:33:37.389 Latency(us) 00:33:37.389 [2024-11-20T09:07:08.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:37.390 [2024-11-20T09:07:08.306Z] =================================================================================================================== 00:33:37.390 [2024-11-20T09:07:08.306Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:37.390 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1609510 00:33:37.390 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:37.650 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:33:37.650 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d89a1c9a-4c9a-481f-b66b-cba097ec73d1 00:33:37.650 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:37.912 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:37.912 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:37.912 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1606027 00:33:37.912 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1606027 00:33:37.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1606027 Killed "${NVMF_APP[@]}" "$@" 00:33:37.912 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:37.912 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:37.912 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:37.912 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:37.912 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:37.912 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1611801 00:33:37.912 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1611801 00:33:37.912 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:37.912 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1611801 ']' 00:33:37.912 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:37.912 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:37.912 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:37.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
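The trace above is the core of the lvs_grow_dirty case: the lvstore is grown while bdevperf I/O is in flight, the check that total_data_clusters reached 99 passes, and the original nvmf_tgt (pid 1606027) is then SIGKILLed so the lvstore is never cleanly unloaded before a fresh target comes up. A minimal bash sketch of that grow-then-kill flow, assuming the rpc.py path used throughout this job, the lvstore UUID from this run, and an illustrative $nvmfpid holding the target's PID:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  lvs=d89a1c9a-4c9a-481f-b66b-cba097ec73d1            # lvstore UUID from this run
  $rpc bdev_lvol_grow_lvstore -u "$lvs"               # grow into the enlarged base bdev
  total=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
  (( total == 99 )) || exit 1                         # grown size visible before shutdown
  kill -9 "$nvmfpid"                                  # no clean unload: metadata left dirty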
00:33:37.912 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:37.912 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:37.912 [2024-11-20 10:07:08.814409] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:37.912 [2024-11-20 10:07:08.815432] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:33:37.912 [2024-11-20 10:07:08.815483] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:38.172 [2024-11-20 10:07:08.908500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.172 [2024-11-20 10:07:08.939838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:38.172 [2024-11-20 10:07:08.939868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:38.172 [2024-11-20 10:07:08.939874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:38.172 [2024-11-20 10:07:08.939879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:38.172 [2024-11-20 10:07:08.939884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:38.172 [2024-11-20 10:07:08.940352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.172 [2024-11-20 10:07:08.991359] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:38.172 [2024-11-20 10:07:08.991555] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
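The restart immediately above comes up with --interrupt-mode, which is why both app_thread and nvmf_tgt_poll_group_000 report being set to intr mode: reactors sleep on an event fd instead of busy-polling. One way to approximate the relaunch and the waitforlisten step, assuming the namespace and build-tree paths used by this job:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!
  # poll until the RPC socket answers; rpc_get_methods is a cheap query
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done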
00:33:38.744 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:38.744 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:38.744 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:38.744 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:38.744 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:38.744 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:38.744 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:39.006 [2024-11-20 10:07:09.802597] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:39.006 [2024-11-20 10:07:09.802848] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:39.006 [2024-11-20 10:07:09.802938] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:39.006 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:39.006 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e41cf6f1-f5fb-4d0c-b3d6-b73445a3d984 00:33:39.006 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e41cf6f1-f5fb-4d0c-b3d6-b73445a3d984 00:33:39.006 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:39.006 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:39.006 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:39.007 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:39.007 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:39.267 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e41cf6f1-f5fb-4d0c-b3d6-b73445a3d984 -t 2000 00:33:39.267 [ 00:33:39.267 { 00:33:39.267 "name": "e41cf6f1-f5fb-4d0c-b3d6-b73445a3d984", 00:33:39.267 "aliases": [ 00:33:39.267 "lvs/lvol" 00:33:39.267 ], 00:33:39.267 "product_name": "Logical Volume", 00:33:39.267 "block_size": 4096, 00:33:39.267 "num_blocks": 38912, 00:33:39.267 "uuid": "e41cf6f1-f5fb-4d0c-b3d6-b73445a3d984", 00:33:39.267 "assigned_rate_limits": { 00:33:39.267 "rw_ios_per_sec": 0, 00:33:39.267 "rw_mbytes_per_sec": 0, 00:33:39.267 
"r_mbytes_per_sec": 0, 00:33:39.267 "w_mbytes_per_sec": 0 00:33:39.267 }, 00:33:39.267 "claimed": false, 00:33:39.267 "zoned": false, 00:33:39.267 "supported_io_types": { 00:33:39.267 "read": true, 00:33:39.267 "write": true, 00:33:39.267 "unmap": true, 00:33:39.267 "flush": false, 00:33:39.267 "reset": true, 00:33:39.267 "nvme_admin": false, 00:33:39.267 "nvme_io": false, 00:33:39.267 "nvme_io_md": false, 00:33:39.267 "write_zeroes": true, 00:33:39.267 "zcopy": false, 00:33:39.267 "get_zone_info": false, 00:33:39.267 "zone_management": false, 00:33:39.267 "zone_append": false, 00:33:39.267 "compare": false, 00:33:39.267 "compare_and_write": false, 00:33:39.267 "abort": false, 00:33:39.267 "seek_hole": true, 00:33:39.267 "seek_data": true, 00:33:39.267 "copy": false, 00:33:39.267 "nvme_iov_md": false 00:33:39.267 }, 00:33:39.267 "driver_specific": { 00:33:39.267 "lvol": { 00:33:39.267 "lvol_store_uuid": "d89a1c9a-4c9a-481f-b66b-cba097ec73d1", 00:33:39.267 "base_bdev": "aio_bdev", 00:33:39.267 "thin_provision": false, 00:33:39.267 "num_allocated_clusters": 38, 00:33:39.267 "snapshot": false, 00:33:39.267 "clone": false, 00:33:39.267 "esnap_clone": false 00:33:39.267 } 00:33:39.267 } 00:33:39.267 } 00:33:39.267 ] 00:33:39.267 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:39.267 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d89a1c9a-4c9a-481f-b66b-cba097ec73d1 00:33:39.267 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:39.529 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:39.529 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d89a1c9a-4c9a-481f-b66b-cba097ec73d1 00:33:39.529 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:39.790 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:39.790 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:39.790 [2024-11-20 10:07:10.660828] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:40.051 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d89a1c9a-4c9a-481f-b66b-cba097ec73d1 00:33:40.051 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:33:40.051 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d89a1c9a-4c9a-481f-b66b-cba097ec73d1 00:33:40.051 10:07:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:40.051 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:40.051 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:40.051 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:40.051 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:40.051 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:40.051 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:40.051 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:40.051 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d89a1c9a-4c9a-481f-b66b-cba097ec73d1 00:33:40.051 request: 00:33:40.051 { 00:33:40.051 "uuid": "d89a1c9a-4c9a-481f-b66b-cba097ec73d1", 00:33:40.051 "method": "bdev_lvol_get_lvstores", 00:33:40.051 "req_id": 1 00:33:40.051 } 00:33:40.051 Got JSON-RPC error response 00:33:40.051 response: 00:33:40.051 { 00:33:40.051 "code": -19, 00:33:40.051 "message": "No such device" 00:33:40.051 } 00:33:40.051 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:33:40.051 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:40.051 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:40.051 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:40.051 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:40.312 aio_bdev 00:33:40.312 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e41cf6f1-f5fb-4d0c-b3d6-b73445a3d984 00:33:40.312 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e41cf6f1-f5fb-4d0c-b3d6-b73445a3d984 00:33:40.312 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:40.312 10:07:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:40.312 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:40.312 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:40.312 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:40.312 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e41cf6f1-f5fb-4d0c-b3d6-b73445a3d984 -t 2000 00:33:40.574 [ 00:33:40.574 { 00:33:40.574 "name": "e41cf6f1-f5fb-4d0c-b3d6-b73445a3d984", 00:33:40.574 "aliases": [ 00:33:40.574 "lvs/lvol" 00:33:40.574 ], 00:33:40.574 "product_name": "Logical Volume", 00:33:40.574 "block_size": 4096, 00:33:40.574 "num_blocks": 38912, 00:33:40.574 "uuid": "e41cf6f1-f5fb-4d0c-b3d6-b73445a3d984", 00:33:40.574 "assigned_rate_limits": { 00:33:40.574 "rw_ios_per_sec": 0, 00:33:40.574 "rw_mbytes_per_sec": 0, 00:33:40.574 "r_mbytes_per_sec": 0, 00:33:40.574 "w_mbytes_per_sec": 0 00:33:40.574 }, 00:33:40.574 "claimed": false, 00:33:40.574 "zoned": false, 00:33:40.574 "supported_io_types": { 00:33:40.574 "read": true, 00:33:40.574 "write": true, 00:33:40.574 "unmap": true, 00:33:40.574 "flush": false, 00:33:40.574 "reset": true, 00:33:40.574 "nvme_admin": false, 00:33:40.574 "nvme_io": false, 00:33:40.574 "nvme_io_md": false, 00:33:40.574 "write_zeroes": true, 00:33:40.574 "zcopy": false, 00:33:40.574 "get_zone_info": false, 00:33:40.574 "zone_management": false, 00:33:40.574 "zone_append": false, 00:33:40.574 "compare": false, 00:33:40.574 "compare_and_write": false, 00:33:40.574 "abort": false, 00:33:40.574 "seek_hole": true, 00:33:40.574 "seek_data": true, 00:33:40.574 "copy": false, 00:33:40.574 "nvme_iov_md": false 00:33:40.574 }, 00:33:40.574 "driver_specific": { 00:33:40.574 "lvol": { 00:33:40.574 "lvol_store_uuid": "d89a1c9a-4c9a-481f-b66b-cba097ec73d1", 00:33:40.574 "base_bdev": "aio_bdev", 00:33:40.574 "thin_provision": false, 00:33:40.574 "num_allocated_clusters": 38, 00:33:40.574 "snapshot": false, 00:33:40.574 "clone": false, 00:33:40.574 "esnap_clone": false 00:33:40.574 } 00:33:40.574 } 00:33:40.574 } 00:33:40.574 ] 00:33:40.574 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:40.574 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d89a1c9a-4c9a-481f-b66b-cba097ec73d1 00:33:40.574 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:40.834 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:40.834 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d89a1c9a-4c9a-481f-b66b-cba097ec73d1 00:33:40.834 10:07:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:40.834 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:40.835 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e41cf6f1-f5fb-4d0c-b3d6-b73445a3d984 00:33:41.097 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d89a1c9a-4c9a-481f-b66b-cba097ec73d1 00:33:41.359 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:41.359 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:41.620 00:33:41.621 real 0m16.868s 00:33:41.621 user 0m34.758s 00:33:41.621 sys 0m2.990s 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:41.621 ************************************ 00:33:41.621 END TEST lvs_grow_dirty 00:33:41.621 ************************************ 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:41.621 nvmf_trace.0 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
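The recovery sequence above is the test's real assertion: re-creating aio_bdev over the same file makes blobstore replay the dirty metadata (the bs_recover / "Recover: blob" notices), after which free_clusters must still be 61 and total_data_clusters 99 before the lvol and lvstore are deleted. Roughly, under the same path assumptions as the earlier sketch:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  lvs=d89a1c9a-4c9a-481f-b66b-cba097ec73d1
  aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
  $rpc bdev_aio_create "$aio" aio_bdev 4096            # triggers blobstore replay
  read -r free total < <($rpc bdev_lvol_get_lvstores -u "$lvs" \
      | jq -r '.[0] | "\(.free_clusters) \(.total_data_clusters)"')
  (( free == 61 && total == 99 )) || exit 1            # grown geometry survived the kill
  $rpc bdev_lvol_delete e41cf6f1-f5fb-4d0c-b3d6-b73445a3d984
  $rpc bdev_lvol_delete_lvstore -u "$lvs"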
00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:41.621 rmmod nvme_tcp 00:33:41.621 rmmod nvme_fabrics 00:33:41.621 rmmod nvme_keyring 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1611801 ']' 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1611801 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1611801 ']' 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1611801 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1611801 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1611801' 00:33:41.621 killing process with pid 1611801 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1611801 00:33:41.621 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1611801 00:33:41.882 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:41.882 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:41.882 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:41.882 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:33:41.882 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:33:41.882 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:41.882 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:33:41.882 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:41.882 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:41.882 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.882 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:41.882 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:44.429 00:33:44.429 real 0m44.136s 00:33:44.429 user 0m53.338s 00:33:44.429 sys 0m10.540s 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:44.429 ************************************ 00:33:44.429 END TEST nvmf_lvs_grow 00:33:44.429 ************************************ 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:44.429 ************************************ 00:33:44.429 START TEST nvmf_bdev_io_wait 00:33:44.429 ************************************ 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:44.429 * Looking for test storage... 
00:33:44.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:44.429 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:44.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.429 --rc genhtml_branch_coverage=1 00:33:44.429 --rc genhtml_function_coverage=1 00:33:44.429 --rc genhtml_legend=1 00:33:44.429 --rc geninfo_all_blocks=1 00:33:44.429 --rc geninfo_unexecuted_blocks=1 00:33:44.429 00:33:44.429 ' 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:44.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.429 --rc genhtml_branch_coverage=1 00:33:44.429 --rc genhtml_function_coverage=1 00:33:44.429 --rc genhtml_legend=1 00:33:44.429 --rc geninfo_all_blocks=1 00:33:44.429 --rc geninfo_unexecuted_blocks=1 00:33:44.429 00:33:44.429 ' 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:44.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.429 --rc genhtml_branch_coverage=1 00:33:44.429 --rc genhtml_function_coverage=1 00:33:44.429 --rc genhtml_legend=1 00:33:44.429 --rc geninfo_all_blocks=1 00:33:44.429 --rc geninfo_unexecuted_blocks=1 00:33:44.429 00:33:44.429 ' 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:44.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.429 --rc genhtml_branch_coverage=1 00:33:44.429 --rc genhtml_function_coverage=1 00:33:44.429 --rc genhtml_legend=1 00:33:44.429 --rc geninfo_all_blocks=1 00:33:44.429 --rc 
geninfo_unexecuted_blocks=1 00:33:44.429 00:33:44.429 ' 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:44.429 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:33:44.430 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
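The block above is gather_supported_nvmf_pci_devs sorting NICs into e810/x722/mlx buckets purely by PCI vendor:device ID; 0x8086:0x159b matches the Intel E810 entries, and because the transport is tcp rather than rdma the e810 list is taken as-is. A condensed illustration of that classification, assuming lspci -Dn output of the form '<bdf> <class>: <vendor>:<device>' (the real script reads a prebuilt pci_bus_cache rather than parsing lspci):

  declare -a e810
  while read -r bdf ids; do
      case $ids in
          8086:1592|8086:159b) e810+=("$bdf") ;;       # Intel E810 device IDs seen here
      esac
  done < <(lspci -Dn | awk '{print $1, $3}')
  (( ${#e810[@]} > 0 )) || { echo 'no supported NICs found'; exit 1; }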
00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:52.575 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:52.575 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:52.575 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:52.575 
10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:52.575 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:52.575 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:52.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:52.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:33:52.576 00:33:52.576 --- 10.0.0.2 ping statistics --- 00:33:52.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.576 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:52.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:52.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:33:52.576 00:33:52.576 --- 10.0.0.1 ping statistics --- 00:33:52.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.576 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1616598 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1616598 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1616598 ']' 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:52.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
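
The sequence above is the heart of nvmf_tcp_init: one E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while its sibling (cvl_0_1, 10.0.0.1) stays in the root namespace, so the two pings prove that initiator-to-target TCP traffic actually crosses the NICs rather than loopback. The target is then launched inside that namespace. A minimal sketch of the launch-and-wait step, assuming repo-relative paths and the default /var/tmp/spdk.sock RPC socket (waitforlisten in autotest_common.sh does roughly this, with retries and a timeout; the pid 1616598 above is just what this run happened to get):

    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!   # pid of sudo here; good enough for a sketch
    # poll the UNIX-domain RPC socket until the app answers
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
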
00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:52.576 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:52.576 [2024-11-20 10:07:22.648960] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:52.576 [2024-11-20 10:07:22.650083] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:33:52.576 [2024-11-20 10:07:22.650133] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:52.576 [2024-11-20 10:07:22.750876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:52.576 [2024-11-20 10:07:22.805508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:52.576 [2024-11-20 10:07:22.805562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:52.576 [2024-11-20 10:07:22.805573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:52.576 [2024-11-20 10:07:22.805580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:52.576 [2024-11-20 10:07:22.805587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:52.576 [2024-11-20 10:07:22.807619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:52.576 [2024-11-20 10:07:22.807780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:52.576 [2024-11-20 10:07:22.807938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:52.576 [2024-11-20 10:07:22.807939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:52.576 [2024-11-20 10:07:22.808300] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
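
The four reactor notices correspond bit-for-bit to the -m 0xF core mask passed to nvmf_tgt. A quick illustrative decode, not part of the test scripts:

    mask=0xF
    # prints "cores: 0 1 2 3", one entry per set bit in the mask
    printf 'cores:'; for i in {0..63}; do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done; echo
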
00:33:52.576 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:52.576 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:33:52.576 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:52.576 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:52.576 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:52.838 [2024-11-20 10:07:23.576756] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:52.838 [2024-11-20 10:07:23.577280] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:52.838 [2024-11-20 10:07:23.577280] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:52.838 [2024-11-20 10:07:23.577478] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
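
The rpc_cmd wrapper used here is scripts/rpc.py pointed at the target's default socket; issued by hand, the two calls above are as below. Shrinking the bdev_io pool to 5 entries with a per-thread cache of 1 appears to be what forces bdevperf into the queue-io-wait path this test exercises (my reading of the setup, not something the log states):

    ./scripts/rpc.py bdev_set_options -p 5 -c 1   # bdev_io pool size 5, per-thread cache 1
    ./scripts/rpc.py framework_start_init         # finish the init deferred by --wait-for-rpc
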
00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:52.838 [2024-11-20 10:07:23.588797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:52.838 Malloc0 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.838 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:52.838 [2024-11-20 10:07:23.661057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1616945 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1616947 00:33:52.839 10:07:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:52.839 { 00:33:52.839 "params": { 00:33:52.839 "name": "Nvme$subsystem", 00:33:52.839 "trtype": "$TEST_TRANSPORT", 00:33:52.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:52.839 "adrfam": "ipv4", 00:33:52.839 "trsvcid": "$NVMF_PORT", 00:33:52.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:52.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:52.839 "hdgst": ${hdgst:-false}, 00:33:52.839 "ddgst": ${ddgst:-false} 00:33:52.839 }, 00:33:52.839 "method": "bdev_nvme_attach_controller" 00:33:52.839 } 00:33:52.839 EOF 00:33:52.839 )") 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1616949 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:52.839 { 00:33:52.839 "params": { 00:33:52.839 "name": "Nvme$subsystem", 00:33:52.839 "trtype": "$TEST_TRANSPORT", 00:33:52.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:52.839 "adrfam": "ipv4", 00:33:52.839 "trsvcid": "$NVMF_PORT", 00:33:52.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:52.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:52.839 "hdgst": ${hdgst:-false}, 00:33:52.839 "ddgst": ${ddgst:-false} 00:33:52.839 }, 00:33:52.839 "method": "bdev_nvme_attach_controller" 00:33:52.839 } 00:33:52.839 EOF 00:33:52.839 )") 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1616952 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:52.839 { 00:33:52.839 "params": { 00:33:52.839 "name": "Nvme$subsystem", 00:33:52.839 "trtype": "$TEST_TRANSPORT", 00:33:52.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:52.839 "adrfam": "ipv4", 00:33:52.839 "trsvcid": "$NVMF_PORT", 00:33:52.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:52.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:52.839 "hdgst": ${hdgst:-false}, 00:33:52.839 "ddgst": ${ddgst:-false} 00:33:52.839 }, 00:33:52.839 "method": "bdev_nvme_attach_controller" 00:33:52.839 } 00:33:52.839 EOF 00:33:52.839 )") 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:52.839 { 00:33:52.839 "params": { 00:33:52.839 "name": "Nvme$subsystem", 00:33:52.839 "trtype": "$TEST_TRANSPORT", 00:33:52.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:52.839 "adrfam": "ipv4", 00:33:52.839 "trsvcid": "$NVMF_PORT", 00:33:52.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:52.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:52.839 "hdgst": ${hdgst:-false}, 00:33:52.839 "ddgst": ${ddgst:-false} 00:33:52.839 }, 00:33:52.839 "method": "bdev_nvme_attach_controller" 00:33:52.839 } 00:33:52.839 EOF 00:33:52.839 )") 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1616945 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
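
Before the four bdevperf workers start, the target side was prepared by the short RPC sequence traced above (bdev_io_wait.sh lines 20-25). Collected in one place, with every name and value exactly as it appears in the trace (rpc_cmd talks to the default /var/tmp/spdk.sock):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8 KiB IO unit
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB ramdisk, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The heredoc blocks surrounding this point are gen_nvmf_target_json assembling, per bdevperf instance, the bdev_nvme_attach_controller config each worker will consume.
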
00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:52.839 "params": { 00:33:52.839 "name": "Nvme1", 00:33:52.839 "trtype": "tcp", 00:33:52.839 "traddr": "10.0.0.2", 00:33:52.839 "adrfam": "ipv4", 00:33:52.839 "trsvcid": "4420", 00:33:52.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:52.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:52.839 "hdgst": false, 00:33:52.839 "ddgst": false 00:33:52.839 }, 00:33:52.839 "method": "bdev_nvme_attach_controller" 00:33:52.839 }' 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:52.839 "params": { 00:33:52.839 "name": "Nvme1", 00:33:52.839 "trtype": "tcp", 00:33:52.839 "traddr": "10.0.0.2", 00:33:52.839 "adrfam": "ipv4", 00:33:52.839 "trsvcid": "4420", 00:33:52.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:52.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:52.839 "hdgst": false, 00:33:52.839 "ddgst": false 00:33:52.839 }, 00:33:52.839 "method": "bdev_nvme_attach_controller" 00:33:52.839 }' 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:52.839 "params": { 00:33:52.839 "name": "Nvme1", 00:33:52.839 "trtype": "tcp", 00:33:52.839 "traddr": "10.0.0.2", 00:33:52.839 "adrfam": "ipv4", 00:33:52.839 "trsvcid": "4420", 00:33:52.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:52.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:52.839 "hdgst": false, 00:33:52.839 "ddgst": false 00:33:52.839 }, 00:33:52.839 "method": "bdev_nvme_attach_controller" 00:33:52.839 }' 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:52.839 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:52.839 "params": { 00:33:52.839 "name": "Nvme1", 00:33:52.839 "trtype": "tcp", 00:33:52.839 "traddr": "10.0.0.2", 00:33:52.839 "adrfam": "ipv4", 00:33:52.839 "trsvcid": "4420", 00:33:52.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:52.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:52.839 "hdgst": false, 00:33:52.839 "ddgst": false 00:33:52.839 }, 00:33:52.839 "method": "bdev_nvme_attach_controller" 00:33:52.839 }' 00:33:52.839 [2024-11-20 10:07:23.719906] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:33:52.839 [2024-11-20 10:07:23.719983] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:52.839 [2024-11-20 10:07:23.722684] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:33:52.840 [2024-11-20 10:07:23.722753] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:52.840 [2024-11-20 10:07:23.730142] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:33:52.840 [2024-11-20 10:07:23.730210] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:52.840 [2024-11-20 10:07:23.730438] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:33:52.840 [2024-11-20 10:07:23.730495] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:53.101 [2024-11-20 10:07:23.935623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.101 [2024-11-20 10:07:23.978124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:53.101 [2024-11-20 10:07:24.003362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.362 [2024-11-20 10:07:24.041315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:53.362 [2024-11-20 10:07:24.096264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.362 [2024-11-20 10:07:24.135670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:53.362 [2024-11-20 10:07:24.188466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.362 [2024-11-20 10:07:24.230630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:53.622 Running I/O for 1 seconds... 00:33:53.622 Running I/O for 1 seconds... 00:33:53.622 Running I/O for 1 seconds... 00:33:53.622 Running I/O for 1 seconds... 
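
Each of the four "Running I/O for 1 seconds..." lines is one bdevperf instance: write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80, each with its own shm id (-i 1..4) and the JSON from gen_nvmf_target_json delivered over a process-substitution fd (the /dev/fd/63 visible in the trace). Schematically, for the write worker, assuming test/nvmf/common.sh has been sourced:

    ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!   # queue depth 128, 4 KiB IOs, 1 s run, 256 MB app memory
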
00:33:54.568 187152.00 IOPS, 731.06 MiB/s 00:33:54.568 Latency(us) 00:33:54.568 [2024-11-20T09:07:25.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.568 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:54.568 Nvme1n1 : 1.00 186775.99 729.59 0.00 0.00 681.28 300.37 1979.73 00:33:54.568 [2024-11-20T09:07:25.484Z] =================================================================================================================== 00:33:54.568 [2024-11-20T09:07:25.484Z] Total : 186775.99 729.59 0.00 0.00 681.28 300.37 1979.73 00:33:54.568 7365.00 IOPS, 28.77 MiB/s 00:33:54.568 Latency(us) 00:33:54.568 [2024-11-20T09:07:25.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.568 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:54.568 Nvme1n1 : 1.02 7342.59 28.68 0.00 0.00 17213.19 2525.87 30365.01 00:33:54.568 [2024-11-20T09:07:25.484Z] =================================================================================================================== 00:33:54.568 [2024-11-20T09:07:25.484Z] Total : 7342.59 28.68 0.00 0.00 17213.19 2525.87 30365.01 00:33:54.568 11836.00 IOPS, 46.23 MiB/s 00:33:54.568 Latency(us) 00:33:54.568 [2024-11-20T09:07:25.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.568 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:54.568 Nvme1n1 : 1.01 11875.31 46.39 0.00 0.00 10738.02 5434.03 15837.87 00:33:54.568 [2024-11-20T09:07:25.484Z] =================================================================================================================== 00:33:54.568 [2024-11-20T09:07:25.484Z] Total : 11875.31 46.39 0.00 0.00 10738.02 5434.03 15837.87 00:33:54.568 7319.00 IOPS, 28.59 MiB/s 00:33:54.568 Latency(us) 00:33:54.568 [2024-11-20T09:07:25.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.568 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:54.568 Nvme1n1 : 1.01 7441.02 29.07 0.00 0.00 17155.46 3904.85 34952.53 00:33:54.568 [2024-11-20T09:07:25.484Z] =================================================================================================================== 00:33:54.568 [2024-11-20T09:07:25.484Z] Total : 7441.02 29.07 0.00 0.00 17155.46 3904.85 34952.53 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1616947 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1616949 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1616952 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:54.831 rmmod nvme_tcp 00:33:54.831 rmmod nvme_fabrics 00:33:54.831 rmmod nvme_keyring 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1616598 ']' 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1616598 00:33:54.831 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1616598 ']' 00:33:54.832 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1616598 00:33:54.832 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:33:54.832 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:54.832 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1616598 00:33:54.832 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:54.832 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:54.832 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1616598' 00:33:54.832 killing process with pid 1616598 00:33:54.832 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1616598 00:33:54.832 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1616598 00:33:55.093 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:55.093 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:55.093 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:55.093 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:55.093 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
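
The iptr cleanup beginning here (iptables-save above; the grep and iptables-restore steps follow in the records just below) works off the comment tag attached at install time: every rule the test adds carries -m comment --comment 'SPDK_NVMF:...', so teardown is a single filter over the saved ruleset instead of per-rule bookkeeping:

    # drop every SPDK-tagged rule, leave the rest of the firewall untouched
    iptables-save | grep -v SPDK_NVMF | iptables-restore
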
00:33:55.093 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:55.093 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:55.093 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:55.093 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:55.093 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.093 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:55.093 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.644 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:57.644 00:33:57.644 real 0m13.153s 00:33:57.644 user 0m16.001s 00:33:57.644 sys 0m7.669s 00:33:57.644 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:57.644 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:57.644 ************************************ 00:33:57.644 END TEST nvmf_bdev_io_wait 00:33:57.644 ************************************ 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:57.644 ************************************ 00:33:57.644 START TEST nvmf_queue_depth 00:33:57.644 ************************************ 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:57.644 * Looking for test storage... 
00:33:57.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:57.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.644 --rc genhtml_branch_coverage=1 00:33:57.644 --rc genhtml_function_coverage=1 00:33:57.644 --rc genhtml_legend=1 00:33:57.644 --rc geninfo_all_blocks=1 00:33:57.644 --rc geninfo_unexecuted_blocks=1 00:33:57.644 00:33:57.644 ' 00:33:57.644 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:57.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.644 --rc genhtml_branch_coverage=1 00:33:57.644 --rc genhtml_function_coverage=1 00:33:57.645 --rc genhtml_legend=1 00:33:57.645 --rc geninfo_all_blocks=1 00:33:57.645 --rc geninfo_unexecuted_blocks=1 00:33:57.645 00:33:57.645 ' 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:57.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.645 --rc genhtml_branch_coverage=1 00:33:57.645 --rc genhtml_function_coverage=1 00:33:57.645 --rc genhtml_legend=1 00:33:57.645 --rc geninfo_all_blocks=1 00:33:57.645 --rc geninfo_unexecuted_blocks=1 00:33:57.645 00:33:57.645 ' 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:57.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.645 --rc genhtml_branch_coverage=1 00:33:57.645 --rc genhtml_function_coverage=1 00:33:57.645 --rc genhtml_legend=1 00:33:57.645 --rc geninfo_all_blocks=1 00:33:57.645 --rc 
geninfo_unexecuted_blocks=1 00:33:57.645 00:33:57.645 ' 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:57.645 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
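
The e810/x722/mlx arrays filled just below are keyed off pci_bus_cache, an associative array mapping "vendor:device" to PCI addresses; its population from the bus scan happens earlier in common.sh and is outside this excerpt. A schematic of the mechanism, with the cache entry invented here to match this host's two E810 ports:

    declare -A pci_bus_cache
    pci_bus_cache["0x8086:0x159b"]="0000:4b:00.0 0000:4b:00.1"   # illustrative entry only
    intel=0x8086
    e810=()
    e810+=(${pci_bus_cache["$intel:0x159b"]})   # unquoted on purpose: one element per address
    echo "${e810[@]}"                           # -> 0000:4b:00.0 0000:4b:00.1
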
00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:05.789 10:07:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:05.789 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:05.789 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:05.790 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:34:05.790 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:05.790 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:05.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:05.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:34:05.790 00:34:05.790 --- 10.0.0.2 ping statistics --- 00:34:05.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.790 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:05.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:05.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:34:05.790 00:34:05.790 --- 10.0.0.1 ping statistics --- 00:34:05.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.790 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1621333 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1621333 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1621333 ']' 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:05.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
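The nvmf_tcp_init sequence above (nvmf/common.sh@250-291) builds a two-interface loopback topology: the target-side port is moved into a private network namespace so initiator and target can talk over real NICs on a single host. A condensed sketch of the equivalent commands, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing seen in this run (names and addresses will differ per machine):

    # move the target-side port into its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps 10.0.0.1; target gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the SPDK_NVMF comment tag lets teardown find the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # verify reachability in both directions, as the pings above do
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1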
00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:05.790 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:05.790 [2024-11-20 10:07:35.690321] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:05.790 [2024-11-20 10:07:35.691281] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:34:05.790 [2024-11-20 10:07:35.691318] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:05.790 [2024-11-20 10:07:35.785765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.790 [2024-11-20 10:07:35.820820] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:05.790 [2024-11-20 10:07:35.820850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:05.790 [2024-11-20 10:07:35.820859] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:05.790 [2024-11-20 10:07:35.820865] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:05.790 [2024-11-20 10:07:35.820871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:05.791 [2024-11-20 10:07:35.821411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:05.791 [2024-11-20 10:07:35.875731] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:05.791 [2024-11-20 10:07:35.875987] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
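waitforlisten (common/autotest_common.sh) blocks until the freshly launched nvmf_tgt answers on its UNIX RPC socket; the return 0 just below is that loop completing. A rough approximation of the idea, not the actual helper (rpc_get_methods is a standard SPDK RPC; the retry bound mirrors the max_retries=100 visible above):

    # poll the RPC socket until the target responds or retries run out
    for ((i = 0; i < 100; i++)); do
        if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5
    done
    (( i == 100 )) && echo 'target never came up' && exit 1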
00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:05.791 [2024-11-20 10:07:36.522131] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:05.791 Malloc0 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
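Spelled out, the queue_depth.sh setup above is five RPCs against the target's default socket: create the TCP transport, back a subsystem with a 64 MiB malloc bdev (512-byte blocks), and expose it on the namespaced address (the listener registration completes just below). Equivalent standalone calls, assuming the default /var/tmp/spdk.sock:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420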
00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:05.791 [2024-11-20 10:07:36.598209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1621655 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1621655 /var/tmp/bdevperf.sock 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1621655 ']' 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:05.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:05.791 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:05.791 [2024-11-20 10:07:36.651365] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:34:05.791 [2024-11-20 10:07:36.651413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621655 ] 00:34:06.052 [2024-11-20 10:07:36.739634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:06.052 [2024-11-20 10:07:36.778462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.625 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:06.625 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:06.625 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:06.625 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.625 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:06.886 NVMe0n1 00:34:06.886 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.886 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:06.886 Running I/O for 10 seconds... 00:34:08.773 8329.00 IOPS, 32.54 MiB/s [2024-11-20T09:07:41.077Z] 8704.00 IOPS, 34.00 MiB/s [2024-11-20T09:07:42.020Z] 9380.67 IOPS, 36.64 MiB/s [2024-11-20T09:07:42.963Z] 10270.75 IOPS, 40.12 MiB/s [2024-11-20T09:07:43.905Z] 10858.20 IOPS, 42.41 MiB/s [2024-11-20T09:07:44.845Z] 11268.00 IOPS, 44.02 MiB/s [2024-11-20T09:07:45.784Z] 11545.71 IOPS, 45.10 MiB/s [2024-11-20T09:07:46.726Z] 11745.12 IOPS, 45.88 MiB/s [2024-11-20T09:07:47.667Z] 11945.22 IOPS, 46.66 MiB/s [2024-11-20T09:07:47.927Z] 12087.10 IOPS, 47.22 MiB/s 00:34:17.011 Latency(us) 00:34:17.011 [2024-11-20T09:07:47.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.011 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:34:17.011 Verification LBA range: start 0x0 length 0x4000 00:34:17.011 NVMe0n1 : 10.06 12112.58 47.31 0.00 0.00 84255.28 25449.81 74711.04 00:34:17.011 [2024-11-20T09:07:47.927Z] =================================================================================================================== 00:34:17.011 [2024-11-20T09:07:47.927Z] Total : 12112.58 47.31 0.00 0.00 84255.28 25449.81 74711.04 00:34:17.011 { 00:34:17.011 "results": [ 00:34:17.011 { 00:34:17.011 "job": "NVMe0n1", 00:34:17.011 "core_mask": "0x1", 00:34:17.011 "workload": "verify", 00:34:17.011 "status": "finished", 00:34:17.011 "verify_range": { 00:34:17.011 "start": 0, 00:34:17.011 "length": 16384 00:34:17.011 }, 00:34:17.011 "queue_depth": 1024, 00:34:17.011 "io_size": 4096, 00:34:17.011 "runtime": 10.06342, 00:34:17.011 "iops": 12112.582004924767, 00:34:17.011 "mibps": 47.31477345673737, 00:34:17.011 "io_failed": 0, 00:34:17.011 "io_timeout": 0, 00:34:17.011 "avg_latency_us": 84255.2798254221, 00:34:17.011 "min_latency_us": 25449.81333333333, 00:34:17.011 "max_latency_us": 74711.04 00:34:17.011 } 00:34:17.011 ], 
00:34:17.011 "core_count": 1 00:34:17.011 } 00:34:17.011 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1621655 00:34:17.011 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1621655 ']' 00:34:17.011 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1621655 00:34:17.011 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:17.011 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:17.011 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1621655 00:34:17.011 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:17.011 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:17.011 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1621655' 00:34:17.011 killing process with pid 1621655 00:34:17.011 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1621655 00:34:17.011 Received shutdown signal, test time was about 10.000000 seconds 00:34:17.011 00:34:17.011 Latency(us) 00:34:17.011 [2024-11-20T09:07:47.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.011 [2024-11-20T09:07:47.927Z] =================================================================================================================== 00:34:17.011 [2024-11-20T09:07:47.927Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:17.011 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1621655 00:34:17.011 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:34:17.011 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:34:17.011 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:17.011 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:34:17.011 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:17.011 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:34:17.011 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:17.011 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:17.272 rmmod nvme_tcp 00:34:17.272 rmmod nvme_fabrics 00:34:17.272 rmmod nvme_keyring 00:34:17.272 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:17.272 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:34:17.272 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:34:17.272 10:07:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1621333 ']' 00:34:17.272 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1621333 00:34:17.272 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1621333 ']' 00:34:17.272 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1621333 00:34:17.272 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:17.272 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:17.272 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1621333 00:34:17.272 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:17.272 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:17.272 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1621333' 00:34:17.272 killing process with pid 1621333 00:34:17.272 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1621333 00:34:17.272 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1621333 00:34:17.272 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:17.272 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:17.272 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:17.272 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:34:17.272 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:34:17.272 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:17.272 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:34:17.272 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:17.272 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:17.272 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.272 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:17.272 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.820 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:19.820 00:34:19.820 real 0m22.196s 00:34:19.820 user 0m24.378s 00:34:19.820 sys 0m7.353s 00:34:19.820 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:34:19.820 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:19.820 ************************************ 00:34:19.820 END TEST nvmf_queue_depth 00:34:19.820 ************************************ 00:34:19.820 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:19.821 ************************************ 00:34:19.821 START TEST nvmf_target_multipath 00:34:19.821 ************************************ 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:19.821 * Looking for test storage... 00:34:19.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:34:19.821 10:07:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:19.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.821 --rc genhtml_branch_coverage=1 00:34:19.821 --rc genhtml_function_coverage=1 00:34:19.821 --rc genhtml_legend=1 00:34:19.821 --rc geninfo_all_blocks=1 00:34:19.821 --rc geninfo_unexecuted_blocks=1 00:34:19.821 00:34:19.821 ' 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:19.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.821 --rc genhtml_branch_coverage=1 00:34:19.821 --rc genhtml_function_coverage=1 00:34:19.821 --rc genhtml_legend=1 00:34:19.821 --rc geninfo_all_blocks=1 00:34:19.821 --rc geninfo_unexecuted_blocks=1 00:34:19.821 00:34:19.821 ' 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:19.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.821 --rc genhtml_branch_coverage=1 00:34:19.821 --rc genhtml_function_coverage=1 00:34:19.821 --rc genhtml_legend=1 00:34:19.821 --rc geninfo_all_blocks=1 00:34:19.821 --rc 
geninfo_unexecuted_blocks=1 00:34:19.821 00:34:19.821 ' 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:19.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.821 --rc genhtml_branch_coverage=1 00:34:19.821 --rc genhtml_function_coverage=1 00:34:19.821 --rc genhtml_legend=1 00:34:19.821 --rc geninfo_all_blocks=1 00:34:19.821 --rc geninfo_unexecuted_blocks=1 00:34:19.821 00:34:19.821 ' 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
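The lcov version gate above (scripts/common.sh, lt 1.15 2 via cmp_versions) is a field-wise numeric compare: split both version strings on ., -, and :, then walk the fields until one side wins. A condensed sketch of the same logic, for numeric fields only (the real cmp_versions also handles the operator argument):

    # returns 0 when $1 < $2, comparing dot-separated numeric fields
    version_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }

Here version_lt 1.15 2 succeeds, so the --rc lcov_branch_coverage/lcov_function_coverage options are enabled.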
00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.821 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:19.822 10:07:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:34:19.822 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
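The discovery pass that follows repeats the pattern from the queue_depth run above: gather_supported_nvmf_pci_devs fills pci_devs with the known Intel E810/X722 and Mellanox device IDs (0x159b on this machine), then nvmf/common.sh@410-429 resolves each PCI address to its kernel interface through sysfs. The core of that mapping, condensed (the full version also checks link state and the rdma/tcp transport cases):

    # resolve each supported PCI function to its net interface name
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the ifname
        net_devs+=("${pci_net_devs[@]}")
    done
    # this run finds cvl_0_0 and cvl_0_1 under 0000:4b:00.0 and 0000:4b:00.1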
00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:28.082 10:07:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:28.082 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:28.082 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:28.082 10:07:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:28.082 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:28.083 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:28.083 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:28.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:28.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:34:28.083 00:34:28.083 --- 10.0.0.2 ping statistics --- 00:34:28.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.083 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:28.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:28.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:34:28.083 00:34:28.083 --- 10.0.0.1 ping statistics --- 00:34:28.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.083 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:34:28.083 only one NIC for nvmf test 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:28.083 rmmod nvme_tcp 00:34:28.083 rmmod nvme_fabrics 00:34:28.083 rmmod nvme_keyring 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:28.083 10:07:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:28.083 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:29.470 10:07:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:29.470 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.470 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:29.470 00:34:29.470 real 0m9.684s 00:34:29.470 user 0m2.103s 00:34:29.470 sys 0m5.526s 00:34:29.470 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:29.470 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:29.470 ************************************ 00:34:29.470 END TEST nvmf_target_multipath 00:34:29.470 ************************************ 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:29.471 ************************************ 00:34:29.471 START TEST nvmf_zcopy 00:34:29.471 ************************************ 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:29.471 * Looking for test storage... 
00:34:29.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:29.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.471 --rc genhtml_branch_coverage=1 00:34:29.471 --rc genhtml_function_coverage=1 00:34:29.471 --rc genhtml_legend=1 00:34:29.471 --rc geninfo_all_blocks=1 00:34:29.471 --rc geninfo_unexecuted_blocks=1 00:34:29.471 00:34:29.471 ' 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:29.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.471 --rc genhtml_branch_coverage=1 00:34:29.471 --rc genhtml_function_coverage=1 00:34:29.471 --rc genhtml_legend=1 00:34:29.471 --rc geninfo_all_blocks=1 00:34:29.471 --rc geninfo_unexecuted_blocks=1 00:34:29.471 00:34:29.471 ' 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:29.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.471 --rc genhtml_branch_coverage=1 00:34:29.471 --rc genhtml_function_coverage=1 00:34:29.471 --rc genhtml_legend=1 00:34:29.471 --rc geninfo_all_blocks=1 00:34:29.471 --rc geninfo_unexecuted_blocks=1 00:34:29.471 00:34:29.471 ' 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:29.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.471 --rc genhtml_branch_coverage=1 00:34:29.471 --rc genhtml_function_coverage=1 00:34:29.471 --rc genhtml_legend=1 00:34:29.471 --rc geninfo_all_blocks=1 00:34:29.471 --rc geninfo_unexecuted_blocks=1 00:34:29.471 00:34:29.471 ' 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:29.471 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:29.472 10:08:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:34:29.472 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:37.610 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:37.610 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:34:37.610 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:37.610 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:37.610 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:34:37.611 10:08:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:37.611 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:37.611 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:37.611 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:37.611 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:37.611 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:37.612 10:08:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:37.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:37.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:34:37.612 00:34:37.612 --- 10.0.0.2 ping statistics --- 00:34:37.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.612 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:37.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:37.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:34:37.612 00:34:37.612 --- 10.0.0.1 ping statistics --- 00:34:37.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.612 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1632006 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1632006 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1632006 ']' 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:37.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:37.612 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:37.612 [2024-11-20 10:08:07.623854] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:37.612 [2024-11-20 10:08:07.624972] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:34:37.612 [2024-11-20 10:08:07.625018] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:37.612 [2024-11-20 10:08:07.726181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:37.612 [2024-11-20 10:08:07.776170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:37.612 [2024-11-20 10:08:07.776223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:37.612 [2024-11-20 10:08:07.776231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:37.612 [2024-11-20 10:08:07.776239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:37.612 [2024-11-20 10:08:07.776245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:37.612 [2024-11-20 10:08:07.777038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:37.613 [2024-11-20 10:08:07.852972] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:37.613 [2024-11-20 10:08:07.853277] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
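Note: the trace above condenses the per-test network bring-up that nvmf/common.sh performs before every target run on this rig: both ice ports are flushed, the target-side port (cvl_0_0) is moved into a fresh network namespace, point-to-point addresses are assigned, an ACCEPT rule is punched for the NVMe/TCP port, connectivity is verified with one ping in each direction, and nvmf_tgt is started inside the namespace in interrupt mode. A minimal replay of those steps, assuming the same cvl_0_* interface names and with $SPDK_DIR as a hypothetical stand-in for the Jenkins workspace path:

    #!/usr/bin/env bash
    # Sketch of the netns setup seen in the trace (nvmf_tcp_init / nvmfappstart).
    # Assumes cvl_0_0/cvl_0_1 exist and are wired back-to-back, and that
    # $SPDK_DIR points at a built SPDK tree; run as root.
    set -e
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"         # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1     # initiator side stays in the host
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # Allow the NVMe/TCP listener port through, tagged so iptr can clean it up.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.2                      # host -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1  # namespace -> host

    # Start the target in the namespace, in interrupt mode, on core mask 0x2;
    # the harness backgrounds this and then waits for the RPC socket.
    ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &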
00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:37.613 [2024-11-20 10:08:08.481857] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:37.613 [2024-11-20 10:08:08.510100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.613 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:37.874 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.874 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:34:37.874 10:08:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.874 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:37.874 malloc0 00:34:37.874 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.874 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:37.874 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.874 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:37.874 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.874 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:34:37.874 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:34:37.874 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:37.874 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:37.874 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:37.874 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:37.874 { 00:34:37.874 "params": { 00:34:37.874 "name": "Nvme$subsystem", 00:34:37.874 "trtype": "$TEST_TRANSPORT", 00:34:37.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:37.874 "adrfam": "ipv4", 00:34:37.874 "trsvcid": "$NVMF_PORT", 00:34:37.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:37.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:37.874 "hdgst": ${hdgst:-false}, 00:34:37.874 "ddgst": ${ddgst:-false} 00:34:37.874 }, 00:34:37.874 "method": "bdev_nvme_attach_controller" 00:34:37.874 } 00:34:37.874 EOF 00:34:37.874 )") 00:34:37.874 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:37.874 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:37.874 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:37.874 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:37.874 "params": { 00:34:37.874 "name": "Nvme1", 00:34:37.874 "trtype": "tcp", 00:34:37.874 "traddr": "10.0.0.2", 00:34:37.874 "adrfam": "ipv4", 00:34:37.874 "trsvcid": "4420", 00:34:37.874 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:37.874 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:37.874 "hdgst": false, 00:34:37.874 "ddgst": false 00:34:37.874 }, 00:34:37.874 "method": "bdev_nvme_attach_controller" 00:34:37.874 }' 00:34:37.874 [2024-11-20 10:08:08.608683] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
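Note: the rpc_cmd calls traced above (zcopy.sh@22 through @30) are how the test provisions the freshly started target. rpc_cmd is a thin wrapper around scripts/rpc.py from autotest_common.sh, and /var/tmp/spdk.sock is a Unix socket, so it stays reachable from the host even though nvmf_tgt runs inside the cvl_0_0_ns_spdk network namespace. The same sequence as plain rpc.py calls, with $SPDK_DIR again a hypothetical stand-in for the workspace path:

    #!/usr/bin/env bash
    # The provisioning RPCs from the trace, collected for readability.
    RPC="$SPDK_DIR/scripts/rpc.py"

    # TCP transport with zero copy enabled (flags copied verbatim from the trace).
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

    # Subsystem: allow any host (-a), fixed serial, at most 10 namespaces (-m).
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

    # Listen on the namespace-side address assigned by nvmf_tcp_init.
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1.
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1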
00:34:37.874 [2024-11-20 10:08:08.608734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1632198 ] 00:34:37.874 [2024-11-20 10:08:08.695872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:37.874 [2024-11-20 10:08:08.732946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.136 Running I/O for 10 seconds... 00:34:40.029 6635.00 IOPS, 51.84 MiB/s [2024-11-20T09:08:12.329Z] 6641.50 IOPS, 51.89 MiB/s [2024-11-20T09:08:13.270Z] 6686.00 IOPS, 52.23 MiB/s [2024-11-20T09:08:14.211Z] 6669.00 IOPS, 52.10 MiB/s [2024-11-20T09:08:15.152Z] 7124.60 IOPS, 55.66 MiB/s [2024-11-20T09:08:16.094Z] 7583.00 IOPS, 59.24 MiB/s [2024-11-20T09:08:17.037Z] 7904.71 IOPS, 61.76 MiB/s [2024-11-20T09:08:17.979Z] 8152.88 IOPS, 63.69 MiB/s [2024-11-20T09:08:19.363Z] 8338.78 IOPS, 65.15 MiB/s [2024-11-20T09:08:19.363Z] 8488.80 IOPS, 66.32 MiB/s 00:34:48.447 Latency(us) 00:34:48.447 [2024-11-20T09:08:19.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:48.447 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:34:48.447 Verification LBA range: start 0x0 length 0x1000 00:34:48.447 Nvme1n1 : 10.01 8492.61 66.35 0.00 0.00 15026.79 2239.15 26760.53 00:34:48.447 [2024-11-20T09:08:19.363Z] =================================================================================================================== 00:34:48.447 [2024-11-20T09:08:19.364Z] Total : 8492.61 66.35 0.00 0.00 15026.79 2239.15 26760.53 00:34:48.448 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1634063 00:34:48.448 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:34:48.448 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:48.448 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:34:48.448 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:34:48.448 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:48.448 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:48.448 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:48.448 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:48.448 { 00:34:48.448 "params": { 00:34:48.448 "name": "Nvme$subsystem", 00:34:48.448 "trtype": "$TEST_TRANSPORT", 00:34:48.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:48.448 "adrfam": "ipv4", 00:34:48.448 "trsvcid": "$NVMF_PORT", 00:34:48.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:48.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:48.448 "hdgst": ${hdgst:-false}, 00:34:48.448 "ddgst": ${ddgst:-false} 00:34:48.448 }, 00:34:48.448 "method": "bdev_nvme_attach_controller" 00:34:48.448 } 00:34:48.448 EOF 00:34:48.448 )") 00:34:48.448 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:48.448 
[2024-11-20 10:08:19.057453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.448 [2024-11-20 10:08:19.057486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.448 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:48.448 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:48.448 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:48.448 "params": { 00:34:48.448 "name": "Nvme1", 00:34:48.448 "trtype": "tcp", 00:34:48.448 "traddr": "10.0.0.2", 00:34:48.448 "adrfam": "ipv4", 00:34:48.448 "trsvcid": "4420", 00:34:48.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:48.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:48.448 "hdgst": false, 00:34:48.448 "ddgst": false 00:34:48.448 }, 00:34:48.448 "method": "bdev_nvme_attach_controller" 00:34:48.448 }' 00:34:48.448 [2024-11-20 10:08:19.069416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.448 [2024-11-20 10:08:19.069425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.448 [2024-11-20 10:08:19.081413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.448 [2024-11-20 10:08:19.081422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.448 [2024-11-20 10:08:19.093413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.448 [2024-11-20 10:08:19.093422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.448 [2024-11-20 10:08:19.098895] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
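Note: both bdevperf invocations in this test read their bdev configuration from an anonymous pipe (--json /dev/fd/62 for the 10-second verify run, /dev/fd/63 for the 5-second run starting here) fed by gen_nvmf_target_json, so no config file touches disk; the generated JSON, visible in the trace, simply attaches Nvme1 to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420. The remaining flags are standard bdevperf options: -t run time in seconds, -q queue depth, -w workload type, -o I/O size in bytes, and -M the read percentage of a mixed workload. A standalone equivalent of the second run, assuming the generated JSON has been saved to a hypothetical bdevperf.json:

    # 5-second 50/50 random read/write run at queue depth 128 with 8 KiB I/O,
    # against the NVMe-oF bdev described by bdevperf.json.
    "$SPDK_DIR/build/examples/bdevperf" \
        --json bdevperf.json \
        -t 5 -q 128 -w randrw -M 50 -o 8192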
00:34:48.448 [2024-11-20 10:08:19.098943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1634063 ]
00:34:48.448 [... error pair repeats every ~12 ms, 10:08:19.105413 through 10:08:19.177421 ...]
00:34:48.448 [2024-11-20 10:08:19.180988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:48.448 [... error pair repeats at 10:08:19.189413 and 10:08:19.201414 ...]
00:34:48.448 [2024-11-20 10:08:19.210883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:48.448 [... error pair repeats every ~12 ms, 10:08:19.213413 through 10:08:19.417421 ...]
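The EAL line records how this bdevperf instance was pinned: -c 0x1 restricts DPDK to core 0, which matches the "Total cores available: 1" and "Reactor started on core 0" notices above, while --file-prefix=spdk_pid1634063 keeps its hugepage files separate from the target process it is talking to. The remaining flags (--no-shconf, --no-telemetry, --huge-unlink, --base-virtaddr, --match-allocations) are standard DPDK EAL options that SPDK passes by default; the log only records them, so that reading is an interpretation rather than something the run itself verifies.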
00:34:48.709 [... error pair repeats every ~12 ms, 10:08:19.429413 through 10:08:19.501439 ...]
00:34:48.709 Running I/O for 5 seconds...
00:34:48.709 [... error pair repeats every ~13-15 ms, 10:08:19.518185 through 10:08:19.613809 ...]
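Each repeated pair is one failed namespace-add RPC round trip: the target pauses the subsystem for the update, spdk_nvmf_subsystem_add_ns_ext rejects NSID 1 as a duplicate, and nvmf_rpc_ns_paused reports the failure back to the caller, after which the nvmf_zcopy test appears to retry on a ~12-15 ms cadence for the whole bdevperf run. A sketch of the kind of call being retried, assuming the stock scripts/rpc.py; the Malloc0 bdev name is hypothetical, only the NQN and the NSID come from the log:

    # Fails with "Requested NSID 1 already in use" while NSID 1 is still
    # mapped in the subsystem, which is exactly the pair this log repeats.
    ./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0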
00:34:48.970 [... error pair repeats every ~13-15 ms, 10:08:19.628936 through 10:08:20.509659 ...]
00:34:49.754 18989.00 IOPS, 148.35 MiB/s [2024-11-20T09:08:20.670Z]
00:34:49.754 [... error pair repeats every ~13-15 ms, 10:08:20.522627 through 10:08:20.592635 ...]
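The stats line is bdevperf's periodic throughput sample. Dividing it out, 148.35 MiB/s over 18989 IOPS comes to about 8192 bytes per operation, so the run is evidently doing 8 KiB I/Os; the later samples (19041.00 and 19056.00 IOPS below) work out to the same size. The error spam costs little: throughput stays flat at roughly 19k IOPS while the add-namespace loop keeps failing in the background.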
00:34:49.754 [... error pair repeats every ~13-15 ms, 10:08:20.605838 through 10:08:21.504536 ...]
00:34:50.798 19041.00 IOPS, 148.76 MiB/s [2024-11-20T09:08:21.714Z]
00:34:50.798 [... error pair repeats every ~13-15 ms, 10:08:21.517602 through 10:08:21.585469 ...]
00:34:50.798 [... error pair repeats every ~13-15 ms, 10:08:21.585484 through 10:08:22.513106 ...]
00:34:51.843 19056.00 IOPS, 148.88 MiB/s [2024-11-20T09:08:22.759Z]
00:34:51.843 [... error pair repeats every ~13-15 ms, 10:08:22.526413 through 10:08:22.568558 ...]
10:08:22.581719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.843 [2024-11-20 10:08:22.581734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.843 [2024-11-20 10:08:22.596341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.843 [2024-11-20 10:08:22.596356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.843 [2024-11-20 10:08:22.609269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.843 [2024-11-20 10:08:22.609284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.843 [2024-11-20 10:08:22.622788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.843 [2024-11-20 10:08:22.622805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.843 [2024-11-20 10:08:22.636607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.843 [2024-11-20 10:08:22.636623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.843 [2024-11-20 10:08:22.649398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.843 [2024-11-20 10:08:22.649414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.843 [2024-11-20 10:08:22.662253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.843 [2024-11-20 10:08:22.662268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.843 [2024-11-20 10:08:22.676750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.843 [2024-11-20 10:08:22.676766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.843 [2024-11-20 10:08:22.689885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.843 [2024-11-20 10:08:22.689900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.843 [2024-11-20 10:08:22.704719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.843 [2024-11-20 10:08:22.704734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.843 [2024-11-20 10:08:22.718093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.843 [2024-11-20 10:08:22.718107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.843 [2024-11-20 10:08:22.732277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.843 [2024-11-20 10:08:22.732292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.843 [2024-11-20 10:08:22.745270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.843 [2024-11-20 10:08:22.745285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.104 [2024-11-20 10:08:22.758278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.104 [2024-11-20 10:08:22.758294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.104 [2024-11-20 10:08:22.772605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.104 [2024-11-20 10:08:22.772621] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.104 [2024-11-20 10:08:22.785548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.104 [2024-11-20 10:08:22.785564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.104 [2024-11-20 10:08:22.798591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.104 [2024-11-20 10:08:22.798607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.104 [2024-11-20 10:08:22.812625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.104 [2024-11-20 10:08:22.812641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.104 [2024-11-20 10:08:22.825748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.104 [2024-11-20 10:08:22.825763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.104 [2024-11-20 10:08:22.840244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.104 [2024-11-20 10:08:22.840259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.104 [2024-11-20 10:08:22.853605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.104 [2024-11-20 10:08:22.853621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.104 [2024-11-20 10:08:22.866210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.104 [2024-11-20 10:08:22.866225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.104 [2024-11-20 10:08:22.880804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.104 [2024-11-20 10:08:22.880820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.104 [2024-11-20 10:08:22.893869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.104 [2024-11-20 10:08:22.893884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.104 [2024-11-20 10:08:22.908652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.104 [2024-11-20 10:08:22.908667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.104 [2024-11-20 10:08:22.921565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.104 [2024-11-20 10:08:22.921581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.104 [2024-11-20 10:08:22.934437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.104 [2024-11-20 10:08:22.934452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.104 [2024-11-20 10:08:22.948670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.104 [2024-11-20 10:08:22.948687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.104 [2024-11-20 10:08:22.961895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.104 [2024-11-20 10:08:22.961910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.104 [2024-11-20 10:08:22.976784] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.104 [2024-11-20 10:08:22.976804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.104 [2024-11-20 10:08:22.989709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.104 [2024-11-20 10:08:22.989724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.104 [2024-11-20 10:08:23.004459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.104 [2024-11-20 10:08:23.004475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.017480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.017496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.030327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.030342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.045147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.045167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.058412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.058428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.072826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.072842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.085848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.085863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.100827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.100842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.114021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.114036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.128261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.128276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.141239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.141254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.153942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.153957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.168540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.168556] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.181592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.181607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.194562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.194578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.208503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.208518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.221529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.221545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.234324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.234344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.248563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.248578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.261829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.261844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.365 [2024-11-20 10:08:23.276378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.365 [2024-11-20 10:08:23.276393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.626 [2024-11-20 10:08:23.289549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.626 [2024-11-20 10:08:23.289564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.626 [2024-11-20 10:08:23.302519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.626 [2024-11-20 10:08:23.302534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.626 [2024-11-20 10:08:23.317014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.626 [2024-11-20 10:08:23.317029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.626 [2024-11-20 10:08:23.330136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.626 [2024-11-20 10:08:23.330151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.626 [2024-11-20 10:08:23.344877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.626 [2024-11-20 10:08:23.344893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.626 [2024-11-20 10:08:23.357468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.626 [2024-11-20 10:08:23.357484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.626 [2024-11-20 10:08:23.370758] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.626 [2024-11-20 10:08:23.370773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.626 [2024-11-20 10:08:23.384939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.626 [2024-11-20 10:08:23.384955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.626 [2024-11-20 10:08:23.397467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.626 [2024-11-20 10:08:23.397483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.626 [2024-11-20 10:08:23.410784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.626 [2024-11-20 10:08:23.410799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.626 [2024-11-20 10:08:23.424654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.626 [2024-11-20 10:08:23.424670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.626 [2024-11-20 10:08:23.437978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.626 [2024-11-20 10:08:23.437993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.626 [2024-11-20 10:08:23.452421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.626 [2024-11-20 10:08:23.452437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.626 [2024-11-20 10:08:23.465402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.626 [2024-11-20 10:08:23.465417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.626 [2024-11-20 10:08:23.478893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.626 [2024-11-20 10:08:23.478907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.626 [2024-11-20 10:08:23.492453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.626 [2024-11-20 10:08:23.492472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.626 [2024-11-20 10:08:23.505513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.626 [2024-11-20 10:08:23.505528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.626 [2024-11-20 10:08:23.518369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.626 [2024-11-20 10:08:23.518384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.626 19064.75 IOPS, 148.94 MiB/s [2024-11-20T09:08:23.542Z] [2024-11-20 10:08:23.532578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.626 [2024-11-20 10:08:23.532593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.888 [2024-11-20 10:08:23.545710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.888 [2024-11-20 10:08:23.545725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.888 [2024-11-20 10:08:23.560531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:34:52.888 [2024-11-20 10:08:23.560545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.888 [2024-11-20 10:08:23.573540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.888 [2024-11-20 10:08:23.573555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.888 [2024-11-20 10:08:23.586530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.888 [2024-11-20 10:08:23.586545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.888 [2024-11-20 10:08:23.600545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.888 [2024-11-20 10:08:23.600560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.888 [2024-11-20 10:08:23.613798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.888 [2024-11-20 10:08:23.613813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.888 [2024-11-20 10:08:23.628495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.888 [2024-11-20 10:08:23.628511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.888 [2024-11-20 10:08:23.641667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.888 [2024-11-20 10:08:23.641682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.888 [2024-11-20 10:08:23.654646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.888 [2024-11-20 10:08:23.654661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.888 [2024-11-20 10:08:23.668505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.888 [2024-11-20 10:08:23.668520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.888 [2024-11-20 10:08:23.681877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.888 [2024-11-20 10:08:23.681892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.888 [2024-11-20 10:08:23.696454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.888 [2024-11-20 10:08:23.696469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.888 [2024-11-20 10:08:23.709269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.888 [2024-11-20 10:08:23.709285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.888 [2024-11-20 10:08:23.722373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.888 [2024-11-20 10:08:23.722388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.888 [2024-11-20 10:08:23.736449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.888 [2024-11-20 10:08:23.736464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.888 [2024-11-20 10:08:23.749262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.888 [2024-11-20 10:08:23.749277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.888 [2024-11-20 10:08:23.762653] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.888 [2024-11-20 10:08:23.762668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.888 [2024-11-20 10:08:23.776898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.888 [2024-11-20 10:08:23.776913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.888 [2024-11-20 10:08:23.789942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.888 [2024-11-20 10:08:23.789957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:23.804352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.149 [2024-11-20 10:08:23.804367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:23.817503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.149 [2024-11-20 10:08:23.817518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:23.830402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.149 [2024-11-20 10:08:23.830416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:23.844690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.149 [2024-11-20 10:08:23.844705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:23.857589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.149 [2024-11-20 10:08:23.857604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:23.870400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.149 [2024-11-20 10:08:23.870414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:23.884901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.149 [2024-11-20 10:08:23.884917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:23.898008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.149 [2024-11-20 10:08:23.898023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:23.912562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.149 [2024-11-20 10:08:23.912577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:23.925329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.149 [2024-11-20 10:08:23.925344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:23.938413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.149 [2024-11-20 10:08:23.938427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:23.952432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.149 [2024-11-20 10:08:23.952447] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:23.965370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.149 [2024-11-20 10:08:23.965384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:23.978652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.149 [2024-11-20 10:08:23.978667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:23.992574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.149 [2024-11-20 10:08:23.992588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:24.005356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.149 [2024-11-20 10:08:24.005372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:24.018765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.149 [2024-11-20 10:08:24.018780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:24.033372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.149 [2024-11-20 10:08:24.033388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:24.046594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.149 [2024-11-20 10:08:24.046609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.149 [2024-11-20 10:08:24.060814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.150 [2024-11-20 10:08:24.060829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.411 [2024-11-20 10:08:24.073702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.411 [2024-11-20 10:08:24.073718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.411 [2024-11-20 10:08:24.086603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.411 [2024-11-20 10:08:24.086617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.411 [2024-11-20 10:08:24.100724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.411 [2024-11-20 10:08:24.100740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.411 [2024-11-20 10:08:24.113896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.411 [2024-11-20 10:08:24.113911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.411 [2024-11-20 10:08:24.128447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.411 [2024-11-20 10:08:24.128463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.411 [2024-11-20 10:08:24.141591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.411 [2024-11-20 10:08:24.141606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.411 [2024-11-20 10:08:24.154498] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.411 [2024-11-20 10:08:24.154513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.411 [2024-11-20 10:08:24.168490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.411 [2024-11-20 10:08:24.168505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.411 [2024-11-20 10:08:24.181310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.411 [2024-11-20 10:08:24.181325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.411 [2024-11-20 10:08:24.193696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.411 [2024-11-20 10:08:24.193710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.411 [2024-11-20 10:08:24.209070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.411 [2024-11-20 10:08:24.209085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.411 [2024-11-20 10:08:24.222503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.411 [2024-11-20 10:08:24.222518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.411 [2024-11-20 10:08:24.236310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.411 [2024-11-20 10:08:24.236325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.411 [2024-11-20 10:08:24.249174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.411 [2024-11-20 10:08:24.249189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.411 [2024-11-20 10:08:24.262396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.411 [2024-11-20 10:08:24.262411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.411 [2024-11-20 10:08:24.276721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.411 [2024-11-20 10:08:24.276737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.411 [2024-11-20 10:08:24.289720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.411 [2024-11-20 10:08:24.289735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.411 [2024-11-20 10:08:24.304278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.411 [2024-11-20 10:08:24.304294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.411 [2024-11-20 10:08:24.317634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.411 [2024-11-20 10:08:24.317650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 [2024-11-20 10:08:24.330660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.330676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 [2024-11-20 10:08:24.345052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.345069] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 [2024-11-20 10:08:24.358096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.358111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 [2024-11-20 10:08:24.372597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.372613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 [2024-11-20 10:08:24.385735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.385750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 [2024-11-20 10:08:24.400317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.400334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 [2024-11-20 10:08:24.412950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.412965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 [2024-11-20 10:08:24.426362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.426377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 [2024-11-20 10:08:24.440243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.440259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 [2024-11-20 10:08:24.453635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.453651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 [2024-11-20 10:08:24.466924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.466939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 [2024-11-20 10:08:24.480630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.480645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 [2024-11-20 10:08:24.493751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.493766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 [2024-11-20 10:08:24.508135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.508151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 [2024-11-20 10:08:24.521348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.521365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 19065.00 IOPS, 148.95 MiB/s 00:34:53.673 Latency(us) 00:34:53.673 [2024-11-20T09:08:24.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:53.673 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:34:53.673 Nvme1n1 : 5.01 19065.85 148.95 
0.00 0.00 6708.19 2512.21 12506.45 00:34:53.673 [2024-11-20T09:08:24.589Z] =================================================================================================================== 00:34:53.673 [2024-11-20T09:08:24.589Z] Total : 19065.85 148.95 0.00 0.00 6708.19 2512.21 12506.45 00:34:53.673 [2024-11-20 10:08:24.529420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.529435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 [2024-11-20 10:08:24.541418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.541433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 [2024-11-20 10:08:24.553424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.553437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 [2024-11-20 10:08:24.565419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.565431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.673 [2024-11-20 10:08:24.577417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.673 [2024-11-20 10:08:24.577428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.934 [2024-11-20 10:08:24.589415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.934 [2024-11-20 10:08:24.589426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.934 [2024-11-20 10:08:24.601414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.934 [2024-11-20 10:08:24.601423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.934 [2024-11-20 10:08:24.613418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.934 [2024-11-20 10:08:24.613429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.934 [2024-11-20 10:08:24.625413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:53.934 [2024-11-20 10:08:24.625422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:53.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1634063) - No such process 00:34:53.934 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1634063 00:34:53.934 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:53.934 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.934 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:53.934 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.934 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:53.934 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
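# Note: the wall of subsystem.c:2123 / nvmf_rpc.c:1517 errors above is the zcopy
# suite re-issuing an add-namespace RPC while NSID 1 is still attached, after which
# the trace swaps the namespace for a delay bdev. A minimal sketch of that RPC
# sequence, assuming a running target and the stock scripts/rpc.py client (the
# rpc_cmd seen in the trace is a thin wrapper around it):
#
#   ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
#       # -> "Requested NSID 1 already in use" while the old namespace is attached
#   ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
#   ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
#       -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies in usec, i.e. ~1 s
#   ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1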
common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.934 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:53.934 delay0 00:34:53.934 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.934 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:34:53.934 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.934 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:53.934 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.934 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:34:53.934 [2024-11-20 10:08:24.791834] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:35:02.077 Initializing NVMe Controllers 00:35:02.077 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:02.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:02.077 Initialization complete. Launching workers. 00:35:02.077 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 243, failed: 25355 00:35:02.077 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 25466, failed to submit 132 00:35:02.077 success 25398, unsuccessful 68, failed 0 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:02.077 rmmod nvme_tcp 00:35:02.077 rmmod nvme_fabrics 00:35:02.077 rmmod nvme_keyring 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1632006 ']' 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1632006 00:35:02.077 10:08:31 
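# Note: killprocess, whose internals are traced below, refuses to kill anything
# whose comm resolves to "sudo" and otherwise kills and reaps the pid. A minimal
# sketch of that guard (simplified, not the verbatim autotest_common.sh helper):
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                   # require a pid
    kill -0 "$pid" 2>/dev/null || return 0      # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")     # process name, as in the trace
    [ "$name" = sudo ] && return 1              # never kill the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null
}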
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1632006 ']' 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1632006 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1632006 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1632006' 00:35:02.077 killing process with pid 1632006 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1632006 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1632006 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:02.077 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.019 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:03.281 00:35:03.281 real 0m33.846s 00:35:03.281 user 0m43.527s 00:35:03.281 sys 0m12.061s 00:35:03.281 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:03.281 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:03.281 ************************************ 00:35:03.281 END TEST nvmf_zcopy 00:35:03.281 ************************************ 00:35:03.281 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- 
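# Note: run_test, invoked next, is the harness wrapper that prints the starred
# START TEST / END TEST banners and the real/user/sys timing seen above. A rough
# sketch of the idea (a hypothetical simplification, not the verbatim helper):
run_test() {
    local suite=$1; shift
    echo "************************************"
    echo "START TEST $suite"
    echo "************************************"
    time "$@"                     # run the suite; yields the real/user/sys lines
    local rc=$?
    echo "************************************"
    echo "END TEST $suite"
    echo "************************************"
    return $rc
}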
# run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:03.281 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:03.281 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:03.281 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:03.281 ************************************ 00:35:03.281 START TEST nvmf_nmic 00:35:03.281 ************************************ 00:35:03.281 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:03.281 * Looking for test storage... 00:35:03.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:03.281 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:03.281 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:35:03.281 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:03.281 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:03.281 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:03.281 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:03.281 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:03.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.542 --rc genhtml_branch_coverage=1 00:35:03.542 --rc genhtml_function_coverage=1 00:35:03.542 --rc genhtml_legend=1 00:35:03.542 --rc geninfo_all_blocks=1 00:35:03.542 --rc geninfo_unexecuted_blocks=1 00:35:03.542 00:35:03.542 ' 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:03.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.542 --rc genhtml_branch_coverage=1 00:35:03.542 --rc genhtml_function_coverage=1 00:35:03.542 --rc genhtml_legend=1 00:35:03.542 --rc geninfo_all_blocks=1 00:35:03.542 --rc geninfo_unexecuted_blocks=1 00:35:03.542 00:35:03.542 ' 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:03.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.542 --rc genhtml_branch_coverage=1 00:35:03.542 --rc genhtml_function_coverage=1 00:35:03.542 --rc genhtml_legend=1 00:35:03.542 --rc geninfo_all_blocks=1 00:35:03.542 --rc geninfo_unexecuted_blocks=1 00:35:03.542 00:35:03.542 ' 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:03.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.542 --rc genhtml_branch_coverage=1 00:35:03.542 --rc genhtml_function_coverage=1 00:35:03.542 --rc genhtml_legend=1 00:35:03.542 --rc geninfo_all_blocks=1 00:35:03.542 --rc geninfo_unexecuted_blocks=1 00:35:03.542 00:35:03.542 ' 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
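# Note: the lcov probe above ("lt 1.15 2" -> cmp_versions "1.15" '<' "2") compares
# dotted versions field by field via the decimal helper. A compact sketch of the
# same comparison (simplified from the scripts/common.sh trace; not the verbatim
# helper):
lt() {
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0    # first differing field decides
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1                                       # equal is not "less than"
}
lt 1.15 2 && echo "lcov 1.15 predates 2"           # matches the branch taken above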
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:03.542 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
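# Note: paths/export.sh prepends the golangci/protoc/go bin dirs every time it is
# sourced, so the PATH echoed in the next lines carries the same segments several
# times over; harmless, just noisy. One way to collapse such duplicates, if needed:
#   printf '%s' "$PATH" | awk -v RS=: '!seen[$0]++' | paste -sd: -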
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:03.543 10:08:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:35:03.543 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:11.681 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:11.681 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:35:11.681 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:11.681 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:11.681 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:11.681 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:11.681 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:11.681 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:35:11.681 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:11.681 10:08:41 
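[editor's note] The xtrace above shows build_nvmf_app_args appending the shared-memory id, a tracepoint mask, and (because this run exercises interrupt mode) --interrupt-mode to the target's argv. A minimal sketch of that assembly; the binary path and the INTERRUPT_MODE guard variable are illustrative placeholders, since the trace only shows the already-expanded '[ 1 -eq 1 ]' test:

#!/usr/bin/env bash
# Sketch only: assemble the nvmf_tgt argv as a bash array, as the trace does.
NVMF_APP=(./build/bin/nvmf_tgt)                  # placeholder path
NVMF_APP_SHM_ID=0
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)      # shm id + tracepoint mask
INTERRUPT_MODE=1    # hypothetical name for the harness's interrupt-test flag
if [ "$INTERRUPT_MODE" -eq 1 ]; then
    NVMF_APP+=(--interrupt-mode)
fi
printf '%s\n' "${NVMF_APP[*]}"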
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:35:11.681 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:35:11.681 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:11.682 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:11.682 10:08:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:11.682 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:11.682 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:11.682 
10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:11.682 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
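[editor's note] The device walk above maps each supported NIC to a kernel interface: the vendor:device pair 0x8086:0x159b identifies the two Intel E810 ports, and the interface name is read out of sysfs. A standalone sketch of the same lookup, with lspci filtering standing in (as an assumption) for the harness's pci_bus_cache:

#!/usr/bin/env bash
# Sketch: resolve Intel E810 (8086:159b) ports to their net interface names.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$net" ] && echo "Found net device under $pci: ${net##*/}"
    done
done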
00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:11.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:11.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:35:11.682 00:35:11.682 --- 10.0.0.2 ping statistics --- 00:35:11.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.682 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:11.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:11.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:35:11.682 00:35:11.682 --- 10.0.0.1 ping statistics --- 00:35:11.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.682 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:11.682 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:11.683 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:11.683 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:11.683 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:11.683 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:11.683 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:35:11.683 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:11.683 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:11.683 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:11.683 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1640718 00:35:11.683 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
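[editor's note] The sequence above is the whole point-to-point fixture: one E810 port stays in the root namespace as the initiator (10.0.0.1), the second moves into a fresh namespace to act as the target (10.0.0.2), an iptables rule admits NVMe/TCP traffic on port 4420, and one ping in each direction proves the path before the target starts. Condensed into a sketch using the interface names from this run:

#!/usr/bin/env bash
# Sketch of the namespace topology built by nvmf_tcp_init above.
set -e
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1             # target ns -> root ns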
nvmf/common.sh@510 -- # waitforlisten 1640718 00:35:11.683 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:11.683 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1640718 ']' 00:35:11.683 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:11.683 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:11.683 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:11.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:11.683 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:11.683 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:11.683 [2024-11-20 10:08:41.665468] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:11.683 [2024-11-20 10:08:41.666590] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:35:11.683 [2024-11-20 10:08:41.666642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:11.683 [2024-11-20 10:08:41.765410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:11.683 [2024-11-20 10:08:41.812041] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:11.683 [2024-11-20 10:08:41.812076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:11.683 [2024-11-20 10:08:41.812085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:11.683 [2024-11-20 10:08:41.812092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:11.683 [2024-11-20 10:08:41.812098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:11.683 [2024-11-20 10:08:41.813655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.683 [2024-11-20 10:08:41.813804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:11.683 [2024-11-20 10:08:41.813953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:11.683 [2024-11-20 10:08:41.813954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:11.683 [2024-11-20 10:08:41.870148] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:11.683 [2024-11-20 10:08:41.871533] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:11.683 [2024-11-20 10:08:41.871699] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
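[editor's note] nvmfappstart launches the target inside the namespace with a 4-core mask and interrupt mode, then waitforlisten polls the RPC UNIX socket until the app answers. A hedged sketch of that startup-and-wait step; the relative paths and the retry budget are illustrative, and rpc_get_methods is used only as a cheap liveness probe:

#!/usr/bin/env bash
RPC=./scripts/rpc.py                                # illustrative path
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!
for _ in $(seq 1 100); do                           # illustrative retry budget
    "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo "target exited early"; exit 1; }
    sleep 0.1
done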
00:35:11.683 [2024-11-20 10:08:41.872297] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:11.683 [2024-11-20 10:08:41.872333] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:11.683 [2024-11-20 10:08:42.502704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:11.683 Malloc0 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:11.683 [2024-11-20 10:08:42.582883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:35:11.683 test case1: single bdev can't be used in multiple subsystems 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.683 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:11.944 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.945 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:11.945 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.945 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:11.945 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.945 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:35:11.945 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:35:11.945 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.945 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:11.945 [2024-11-20 10:08:42.618333] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:35:11.945 [2024-11-20 10:08:42.618355] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:35:11.945 [2024-11-20 10:08:42.618364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:11.945 request: 00:35:11.945 { 00:35:11.945 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:35:11.945 "namespace": { 00:35:11.945 "bdev_name": "Malloc0", 00:35:11.945 "no_auto_visible": false 00:35:11.945 }, 00:35:11.945 "method": "nvmf_subsystem_add_ns", 00:35:11.945 "req_id": 1 00:35:11.945 } 00:35:11.945 Got JSON-RPC error response 00:35:11.945 response: 00:35:11.945 { 00:35:11.945 "code": -32602, 00:35:11.945 "message": "Invalid parameters" 00:35:11.945 } 00:35:11.945 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:11.945 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:35:11.945 10:08:42 
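[editor's note] Test case 1 provisions Malloc0 under cnode1 and then deliberately tries to attach the same bdev to a second subsystem; the add fails with -32602 because the bdev is already claimed exclusive_write by the first subsystem, which is the expected result. The same sequence as plain rpc.py calls, condensed from the trace (socket and script path shortened for the sketch):

#!/usr/bin/env bash
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"        # illustrative path
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
if ! $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo ' Adding namespace failed - expected result.'
fi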
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:35:11.945 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:35:11.945 Adding namespace failed - expected result. 00:35:11.945 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:35:11.945 test case2: host connect to nvmf target in multiple paths 00:35:11.945 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:11.945 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.945 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:11.945 [2024-11-20 10:08:42.630445] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:11.945 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.945 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:12.207 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:35:12.779 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:35:12.779 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:35:12.779 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:12.779 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:12.779 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:35:14.694 10:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:14.694 10:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:14.694 10:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:14.694 10:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:14.694 10:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:14.694 10:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:35:14.694 10:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:14.694 [global] 00:35:14.694 thread=1 00:35:14.694 invalidate=1 
00:35:14.694 rw=write 00:35:14.694 time_based=1 00:35:14.694 runtime=1 00:35:14.694 ioengine=libaio 00:35:14.694 direct=1 00:35:14.694 bs=4096 00:35:14.694 iodepth=1 00:35:14.694 norandommap=0 00:35:14.694 numjobs=1 00:35:14.694 00:35:14.694 verify_dump=1 00:35:14.694 verify_backlog=512 00:35:14.694 verify_state_save=0 00:35:14.694 do_verify=1 00:35:14.694 verify=crc32c-intel 00:35:14.694 [job0] 00:35:14.694 filename=/dev/nvme0n1 00:35:14.985 Could not set queue depth (nvme0n1) 00:35:15.251 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:15.251 fio-3.35 00:35:15.251 Starting 1 thread 00:35:16.635 00:35:16.635 job0: (groupid=0, jobs=1): err= 0: pid=1641601: Wed Nov 20 10:08:47 2024 00:35:16.635 read: IOPS=17, BW=70.9KiB/s (72.6kB/s)(72.0KiB/1016msec) 00:35:16.635 slat (nsec): min=8164, max=29374, avg=27110.33, stdev=4753.75 00:35:16.635 clat (usec): min=1075, max=42229, avg=39353.18, stdev=9565.56 00:35:16.635 lat (usec): min=1103, max=42258, avg=39380.29, stdev=9565.47 00:35:16.635 clat percentiles (usec): 00:35:16.635 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[40633], 20.00th=[41157], 00:35:16.635 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:35:16.635 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:16.635 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:16.635 | 99.99th=[42206] 00:35:16.635 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:35:16.635 slat (usec): min=9, max=27105, avg=83.32, stdev=1196.62 00:35:16.635 clat (usec): min=245, max=835, avg=508.54, stdev=117.68 00:35:16.635 lat (usec): min=256, max=27604, avg=591.85, stdev=1202.47 00:35:16.635 clat percentiles (usec): 00:35:16.635 | 1.00th=[ 281], 5.00th=[ 334], 10.00th=[ 347], 20.00th=[ 404], 00:35:16.635 | 30.00th=[ 445], 40.00th=[ 469], 50.00th=[ 494], 60.00th=[ 529], 00:35:16.635 | 70.00th=[ 578], 80.00th=[ 619], 90.00th=[ 676], 95.00th=[ 709], 00:35:16.635 | 99.00th=[ 750], 99.50th=[ 766], 99.90th=[ 832], 99.95th=[ 832], 00:35:16.635 | 99.99th=[ 832] 00:35:16.635 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:35:16.635 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:16.635 lat (usec) : 250=0.19%, 500=50.57%, 750=44.53%, 1000=1.32% 00:35:16.635 lat (msec) : 2=0.19%, 50=3.21% 00:35:16.635 cpu : usr=1.08%, sys=1.87%, ctx=533, majf=0, minf=1 00:35:16.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.635 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:16.635 00:35:16.635 Run status group 0 (all jobs): 00:35:16.635 READ: bw=70.9KiB/s (72.6kB/s), 70.9KiB/s-70.9KiB/s (72.6kB/s-72.6kB/s), io=72.0KiB (73.7kB), run=1016-1016msec 00:35:16.635 WRITE: bw=2016KiB/s (2064kB/s), 2016KiB/s-2016KiB/s (2064kB/s-2064kB/s), io=2048KiB (2097kB), run=1016-1016msec 00:35:16.635 00:35:16.635 Disk stats (read/write): 00:35:16.635 nvme0n1: ios=41/512, merge=0/0, ticks=1540/202, in_queue=1742, util=98.60% 00:35:16.635 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:16.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 
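[editor's note] Test case 2 adds a second listener on port 4421 and connects the host through both portals, so a single namespace becomes reachable over two paths; that is why the disconnect above reports two controllers torn down. The host-side half as a sketch, reusing the hostnqn/hostid generated for this run:

#!/usr/bin/env bash
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
nvme list-subsys                        # should list two paths to cnode1
nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drops both controllers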
00:35:16.635 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:16.635 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:35:16.635 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:16.636 rmmod nvme_tcp 00:35:16.636 rmmod nvme_fabrics 00:35:16.636 rmmod nvme_keyring 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1640718 ']' 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1640718 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1640718 ']' 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1640718 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1640718 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
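[editor's note] nvmftestfini, installed as the SIGINT/SIGTERM/EXIT trap at the start of the test, unwinds everything in reverse: the nvme kernel modules are removed, the target process is killed by pid, the SPDK_NVMF-tagged iptables rules are stripped, and the namespace is torn down. A condensed sketch, assuming _remove_spdk_ns amounts to deleting the namespace created earlier:

#!/usr/bin/env bash
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring     # separate calls in the trace
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
iptables-save | grep -v SPDK_NVMF | iptables-restore  # keep only untagged rules
ip netns delete cvl_0_0_ns_spdk     # assumption: what _remove_spdk_ns does here
ip -4 addr flush cvl_0_1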
common/autotest_common.sh@972 -- # echo 'killing process with pid 1640718' 00:35:16.636 killing process with pid 1640718 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1640718 00:35:16.636 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1640718 00:35:16.897 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:16.897 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:16.897 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:16.897 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:35:16.897 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:16.897 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:35:16.897 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:35:16.897 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:16.897 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:16.897 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:16.897 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:16.897 10:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:18.808 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:18.808 00:35:18.808 real 0m15.668s 00:35:18.808 user 0m38.051s 00:35:18.808 sys 0m7.282s 00:35:18.808 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:18.808 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:18.808 ************************************ 00:35:18.808 END TEST nvmf_nmic 00:35:18.808 ************************************ 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:19.069 ************************************ 00:35:19.069 START TEST nvmf_fio_target 00:35:19.069 ************************************ 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:19.069 * Looking for test storage... 
00:35:19.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:19.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.069 --rc genhtml_branch_coverage=1 00:35:19.069 --rc genhtml_function_coverage=1 00:35:19.069 --rc genhtml_legend=1 00:35:19.069 --rc geninfo_all_blocks=1 00:35:19.069 --rc geninfo_unexecuted_blocks=1 00:35:19.069 00:35:19.069 ' 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:19.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.069 --rc genhtml_branch_coverage=1 00:35:19.069 --rc genhtml_function_coverage=1 00:35:19.069 --rc genhtml_legend=1 00:35:19.069 --rc geninfo_all_blocks=1 00:35:19.069 --rc geninfo_unexecuted_blocks=1 00:35:19.069 00:35:19.069 ' 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:19.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.069 --rc genhtml_branch_coverage=1 00:35:19.069 --rc genhtml_function_coverage=1 00:35:19.069 --rc genhtml_legend=1 00:35:19.069 --rc geninfo_all_blocks=1 00:35:19.069 --rc geninfo_unexecuted_blocks=1 00:35:19.069 00:35:19.069 ' 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:19.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.069 --rc genhtml_branch_coverage=1 00:35:19.069 --rc genhtml_function_coverage=1 00:35:19.069 --rc genhtml_legend=1 00:35:19.069 --rc geninfo_all_blocks=1 00:35:19.069 --rc geninfo_unexecuted_blocks=1 00:35:19.069 
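[editor's note] The cmp_versions walk above splits the installed lcov version ('1.15') and the threshold ('2') on dots and compares them field by field; since 1 < 2 it settles on the legacy --rc option spelling. The same decision can be sketched with sort -V as a stand-in (an assumption; the harness uses its own field loop):

#!/usr/bin/env bash
lt() {  # true when $1 sorts strictly before $2 as a version string
    [ "$1" != "$2" ] &&
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
if lt "1.15" "2"; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi
echo "$LCOV_OPTS"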
00:35:19.069 ' 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:35:19.069 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:19.331 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:19.331 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:19.331 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:19.331 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:19.331 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:19.331 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:19.331 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:19.331 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:19.331 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:19.331 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:19.331 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:19.331 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:19.331 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:19.331 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:19.331 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:19.331 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:19.331 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:19.331 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:19.331 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:19.331 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:27.468 10:08:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:27.468 10:08:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:27.468 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:27.468 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:27.468 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:27.468 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.468 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:27.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:27.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:35:27.469 00:35:27.469 --- 10.0.0.2 ping statistics --- 00:35:27.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.469 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:27.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:27.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:35:27.469 00:35:27.469 --- 10.0.0.1 ping statistics --- 00:35:27.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.469 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1646181 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1646181 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1646181 ']' 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:27.469 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:27.469 [2024-11-20 10:08:57.575041] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:27.469 [2024-11-20 10:08:57.576152] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:35:27.469 [2024-11-20 10:08:57.576227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:27.469 [2024-11-20 10:08:57.673903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:27.469 [2024-11-20 10:08:57.727537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:27.469 [2024-11-20 10:08:57.727588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:27.469 [2024-11-20 10:08:57.727596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:27.469 [2024-11-20 10:08:57.727604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:27.469 [2024-11-20 10:08:57.727610] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:27.469 [2024-11-20 10:08:57.730035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:27.469 [2024-11-20 10:08:57.730221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:27.469 [2024-11-20 10:08:57.730330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:27.469 [2024-11-20 10:08:57.730331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:27.469 [2024-11-20 10:08:57.807511] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:27.469 [2024-11-20 10:08:57.808497] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:27.469 [2024-11-20 10:08:57.808699] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:27.469 [2024-11-20 10:08:57.809101] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:27.469 [2024-11-20 10:08:57.809146] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
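The bring-up traced above reduces to a short, reusable sequence: one port of the NIC pair (cvl_0_0) is moved into a private network namespace and serves as the target side, while its sibling (cvl_0_1) stays in the default namespace as the initiator side. A condensed sketch of what nvmf_tcp_init and nvmfappstart executed, using the interface names, addresses, port, and flags exactly as they appear in this run (only the absolute Jenkins workspace path is shortened to the SPDK checkout root):

# flush any stale addresses, then move the target-side port into its own namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# initiator keeps 10.0.0.1 in the default namespace; the target interface
# gets 10.0.0.2 inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP listen port toward the initiator-side interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# verify reachability in both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# launch the target inside the namespace: shm id 0, tracepoint mask 0xFFFF,
# interrupt mode, reactors on cores 0-3 (-m 0xF)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF

The NOTICE lines above confirm the effect of --interrupt-mode: all four reactors and the nvmf_tgt_poll_group threads come up in interrupt mode rather than busy polling.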
00:35:27.765 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:27.765 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:35:27.765 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:27.765 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:27.765 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:27.765 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:27.765 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:27.765 [2024-11-20 10:08:58.595424] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:27.765 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:28.057 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:35:28.057 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:28.336 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:35:28.336 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:28.336 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:35:28.336 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:28.602 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:35:28.602 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:35:28.862 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:28.862 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:35:28.862 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:29.122 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:35:29.122 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:29.382 10:09:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:35:29.382 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:35:29.382 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:29.644 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:29.644 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:29.907 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:29.907 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:30.177 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:30.177 [2024-11-20 10:09:00.987340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:30.177 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:35:30.438 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:35:30.699 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:30.961 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:35:30.961 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:35:30.961 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:30.961 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:35:30.961 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:35:30.961 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:35:33.506 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:33.506 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:35:33.506 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:33.506 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:35:33.506 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:33.506 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:35:33.506 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:33.506 [global] 00:35:33.506 thread=1 00:35:33.506 invalidate=1 00:35:33.506 rw=write 00:35:33.506 time_based=1 00:35:33.506 runtime=1 00:35:33.506 ioengine=libaio 00:35:33.506 direct=1 00:35:33.506 bs=4096 00:35:33.506 iodepth=1 00:35:33.506 norandommap=0 00:35:33.506 numjobs=1 00:35:33.506 00:35:33.506 verify_dump=1 00:35:33.506 verify_backlog=512 00:35:33.506 verify_state_save=0 00:35:33.506 do_verify=1 00:35:33.506 verify=crc32c-intel 00:35:33.506 [job0] 00:35:33.506 filename=/dev/nvme0n1 00:35:33.506 [job1] 00:35:33.506 filename=/dev/nvme0n2 00:35:33.506 [job2] 00:35:33.506 filename=/dev/nvme0n3 00:35:33.506 [job3] 00:35:33.506 filename=/dev/nvme0n4 00:35:33.506 Could not set queue depth (nvme0n1) 00:35:33.506 Could not set queue depth (nvme0n2) 00:35:33.506 Could not set queue depth (nvme0n3) 00:35:33.506 Could not set queue depth (nvme0n4) 00:35:33.506 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:33.506 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:33.506 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:33.506 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:33.506 fio-3.35 00:35:33.506 Starting 4 threads 00:35:34.892 00:35:34.892 job0: (groupid=0, jobs=1): err= 0: pid=1647653: Wed Nov 20 10:09:05 2024 00:35:34.892 read: IOPS=30, BW=122KiB/s (125kB/s)(124KiB/1016msec) 00:35:34.892 slat (nsec): min=8194, max=28687, avg=25868.19, stdev=4554.84 00:35:34.892 clat (usec): min=807, max=42059, avg=22119.41, stdev=20740.58 00:35:34.892 lat (usec): min=831, max=42087, avg=22145.28, stdev=20741.17 00:35:34.892 clat percentiles (usec): 00:35:34.892 | 1.00th=[ 807], 5.00th=[ 865], 10.00th=[ 898], 20.00th=[ 1004], 00:35:34.892 | 30.00th=[ 1045], 40.00th=[ 1303], 50.00th=[41157], 60.00th=[41681], 00:35:34.892 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:34.892 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:34.892 | 99.99th=[42206] 00:35:34.892 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:35:34.892 slat (nsec): min=9475, max=74047, avg=31962.40, stdev=11265.60 00:35:34.892 clat (usec): min=258, max=1009, avg=599.72, stdev=131.32 00:35:34.892 lat (usec): min=269, max=1020, avg=631.69, stdev=135.46 00:35:34.892 clat percentiles (usec): 00:35:34.892 | 1.00th=[ 297], 5.00th=[ 367], 10.00th=[ 433], 20.00th=[ 482], 00:35:34.892 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 635], 00:35:34.892 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 799], 00:35:34.892 | 
99.00th=[ 848], 99.50th=[ 947], 99.90th=[ 1012], 99.95th=[ 1012], 00:35:34.892 | 99.99th=[ 1012] 00:35:34.892 bw ( KiB/s): min= 4096, max= 4096, per=51.50%, avg=4096.00, stdev= 0.00, samples=1 00:35:34.892 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:34.892 lat (usec) : 500=24.13%, 750=57.83%, 1000=13.26% 00:35:34.892 lat (msec) : 2=1.84%, 50=2.95% 00:35:34.892 cpu : usr=0.89%, sys=2.17%, ctx=544, majf=0, minf=1 00:35:34.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.892 issued rwts: total=31,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:34.892 job1: (groupid=0, jobs=1): err= 0: pid=1647655: Wed Nov 20 10:09:05 2024 00:35:34.892 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:35:34.892 slat (nsec): min=15090, max=28018, avg=23137.82, stdev=5888.83 00:35:34.892 clat (usec): min=777, max=41969, avg=30425.19, stdev=18460.36 00:35:34.892 lat (usec): min=793, max=41997, avg=30448.33, stdev=18465.15 00:35:34.892 clat percentiles (usec): 00:35:34.892 | 1.00th=[ 775], 5.00th=[ 979], 10.00th=[ 988], 20.00th=[ 1020], 00:35:34.892 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:34.892 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:35:34.892 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:34.892 | 99.99th=[42206] 00:35:34.892 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:35:34.892 slat (usec): min=5, max=1717, avg=27.44, stdev=114.64 00:35:34.892 clat (usec): min=227, max=1012, avg=619.31, stdev=129.87 00:35:34.892 lat (usec): min=237, max=2391, avg=646.76, stdev=175.04 00:35:34.892 clat percentiles (usec): 00:35:34.892 | 1.00th=[ 314], 5.00th=[ 388], 10.00th=[ 441], 20.00th=[ 510], 00:35:34.892 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 660], 00:35:34.892 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 775], 95.00th=[ 807], 00:35:34.892 | 99.00th=[ 914], 99.50th=[ 955], 99.90th=[ 1012], 99.95th=[ 1012], 00:35:34.892 | 99.99th=[ 1012] 00:35:34.892 bw ( KiB/s): min= 4096, max= 4096, per=51.50%, avg=4096.00, stdev= 0.00, samples=1 00:35:34.892 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:34.892 lat (usec) : 250=0.19%, 500=17.23%, 750=63.30%, 1000=15.54% 00:35:34.892 lat (msec) : 2=0.75%, 50=3.00% 00:35:34.892 cpu : usr=0.70%, sys=1.29%, ctx=538, majf=0, minf=1 00:35:34.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.893 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.893 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:34.893 job2: (groupid=0, jobs=1): err= 0: pid=1647661: Wed Nov 20 10:09:05 2024 00:35:34.893 read: IOPS=16, BW=66.0KiB/s (67.6kB/s)(68.0KiB/1030msec) 00:35:34.893 slat (nsec): min=26040, max=26791, avg=26312.24, stdev=171.28 00:35:34.893 clat (usec): min=1072, max=42073, avg=39533.03, stdev=9911.74 00:35:34.893 lat (usec): min=1099, max=42099, avg=39559.34, stdev=9911.62 00:35:34.893 clat percentiles (usec): 00:35:34.893 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[41681], 
20.00th=[41681], 00:35:34.893 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:34.893 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:34.893 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:34.893 | 99.99th=[42206] 00:35:34.893 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:35:34.893 slat (nsec): min=10189, max=81367, avg=31962.32, stdev=9444.53 00:35:34.893 clat (usec): min=218, max=1163, avg=657.43, stdev=140.05 00:35:34.893 lat (usec): min=232, max=1198, avg=689.39, stdev=143.23 00:35:34.893 clat percentiles (usec): 00:35:34.893 | 1.00th=[ 306], 5.00th=[ 416], 10.00th=[ 478], 20.00th=[ 553], 00:35:34.893 | 30.00th=[ 603], 40.00th=[ 627], 50.00th=[ 668], 60.00th=[ 701], 00:35:34.893 | 70.00th=[ 734], 80.00th=[ 758], 90.00th=[ 824], 95.00th=[ 873], 00:35:34.893 | 99.00th=[ 1004], 99.50th=[ 1106], 99.90th=[ 1172], 99.95th=[ 1172], 00:35:34.893 | 99.99th=[ 1172] 00:35:34.893 bw ( KiB/s): min= 4096, max= 4096, per=51.50%, avg=4096.00, stdev= 0.00, samples=1 00:35:34.893 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:34.893 lat (usec) : 250=0.38%, 500=13.61%, 750=60.87%, 1000=20.98% 00:35:34.893 lat (msec) : 2=1.13%, 50=3.02% 00:35:34.893 cpu : usr=0.58%, sys=1.75%, ctx=530, majf=0, minf=1 00:35:34.893 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.893 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.893 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:34.893 job3: (groupid=0, jobs=1): err= 0: pid=1647666: Wed Nov 20 10:09:05 2024 00:35:34.893 read: IOPS=29, BW=118KiB/s (121kB/s)(120KiB/1016msec) 00:35:34.893 slat (nsec): min=26604, max=28667, avg=26953.20, stdev=370.88 00:35:34.893 clat (usec): min=883, max=42002, avg=21901.24, stdev=20496.45 00:35:34.893 lat (usec): min=910, max=42029, avg=21928.19, stdev=20496.48 00:35:34.893 clat percentiles (usec): 00:35:34.893 | 1.00th=[ 881], 5.00th=[ 889], 10.00th=[ 955], 20.00th=[ 1037], 00:35:34.893 | 30.00th=[ 1057], 40.00th=[ 1287], 50.00th=[12780], 60.00th=[41681], 00:35:34.893 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:34.893 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:34.893 | 99.99th=[42206] 00:35:34.893 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:35:34.893 slat (nsec): min=10293, max=76848, avg=33125.75, stdev=9820.22 00:35:34.893 clat (usec): min=241, max=1094, avg=655.74, stdev=149.12 00:35:34.893 lat (usec): min=254, max=1129, avg=688.87, stdev=150.49 00:35:34.893 clat percentiles (usec): 00:35:34.893 | 1.00th=[ 314], 5.00th=[ 396], 10.00th=[ 469], 20.00th=[ 529], 00:35:34.893 | 30.00th=[ 578], 40.00th=[ 619], 50.00th=[ 660], 60.00th=[ 701], 00:35:34.893 | 70.00th=[ 734], 80.00th=[ 766], 90.00th=[ 848], 95.00th=[ 889], 00:35:34.893 | 99.00th=[ 1004], 99.50th=[ 1057], 99.90th=[ 1090], 99.95th=[ 1090], 00:35:34.893 | 99.99th=[ 1090] 00:35:34.893 bw ( KiB/s): min= 4096, max= 4096, per=51.50%, avg=4096.00, stdev= 0.00, samples=1 00:35:34.893 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:34.893 lat (usec) : 250=0.18%, 500=13.47%, 750=56.09%, 1000=24.35% 00:35:34.893 lat (msec) : 2=2.95%, 20=0.18%, 50=2.77% 00:35:34.893 cpu : usr=1.18%, sys=1.38%, 
ctx=544, majf=0, minf=1 00:35:34.893 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.893 issued rwts: total=30,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.893 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:34.893 00:35:34.893 Run status group 0 (all jobs): 00:35:34.893 READ: bw=388KiB/s (398kB/s), 66.0KiB/s-122KiB/s (67.6kB/s-125kB/s), io=400KiB (410kB), run=1005-1030msec 00:35:34.893 WRITE: bw=7953KiB/s (8144kB/s), 1988KiB/s-2038KiB/s (2036kB/s-2087kB/s), io=8192KiB (8389kB), run=1005-1030msec 00:35:34.893 00:35:34.893 Disk stats (read/write): 00:35:34.893 nvme0n1: ios=68/512, merge=0/0, ticks=527/251, in_queue=778, util=86.77% 00:35:34.893 nvme0n2: ios=69/512, merge=0/0, ticks=614/245, in_queue=859, util=90.82% 00:35:34.893 nvme0n3: ios=35/512, merge=0/0, ticks=1343/322, in_queue=1665, util=91.97% 00:35:34.893 nvme0n4: ios=88/512, merge=0/0, ticks=615/321, in_queue=936, util=97.22% 00:35:34.893 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:35:34.893 [global] 00:35:34.893 thread=1 00:35:34.893 invalidate=1 00:35:34.893 rw=randwrite 00:35:34.893 time_based=1 00:35:34.893 runtime=1 00:35:34.893 ioengine=libaio 00:35:34.893 direct=1 00:35:34.893 bs=4096 00:35:34.893 iodepth=1 00:35:34.893 norandommap=0 00:35:34.893 numjobs=1 00:35:34.893 00:35:34.893 verify_dump=1 00:35:34.893 verify_backlog=512 00:35:34.893 verify_state_save=0 00:35:34.893 do_verify=1 00:35:34.893 verify=crc32c-intel 00:35:34.893 [job0] 00:35:34.893 filename=/dev/nvme0n1 00:35:34.893 [job1] 00:35:34.893 filename=/dev/nvme0n2 00:35:34.893 [job2] 00:35:34.893 filename=/dev/nvme0n3 00:35:34.893 [job3] 00:35:34.893 filename=/dev/nvme0n4 00:35:34.893 Could not set queue depth (nvme0n1) 00:35:34.893 Could not set queue depth (nvme0n2) 00:35:34.893 Could not set queue depth (nvme0n3) 00:35:34.893 Could not set queue depth (nvme0n4) 00:35:35.154 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:35.154 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:35.154 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:35.154 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:35.154 fio-3.35 00:35:35.154 Starting 4 threads 00:35:36.541 00:35:36.541 job0: (groupid=0, jobs=1): err= 0: pid=1648162: Wed Nov 20 10:09:07 2024 00:35:36.541 read: IOPS=408, BW=1634KiB/s (1673kB/s)(1688KiB/1033msec) 00:35:36.541 slat (nsec): min=3994, max=43554, avg=11191.98, stdev=6892.00 00:35:36.541 clat (usec): min=575, max=42103, avg=1620.87, stdev=5153.03 00:35:36.541 lat (usec): min=584, max=42112, avg=1632.06, stdev=5152.98 00:35:36.541 clat percentiles (usec): 00:35:36.541 | 1.00th=[ 685], 5.00th=[ 783], 10.00th=[ 824], 20.00th=[ 881], 00:35:36.541 | 30.00th=[ 906], 40.00th=[ 930], 50.00th=[ 947], 60.00th=[ 971], 00:35:36.541 | 70.00th=[ 996], 80.00th=[ 1020], 90.00th=[ 1106], 95.00th=[ 1188], 00:35:36.541 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:35:36.541 | 99.99th=[42206] 00:35:36.541 write: 
IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:35:36.541 slat (nsec): min=5780, max=53290, avg=29509.06, stdev=9895.67 00:35:36.541 clat (usec): min=166, max=1072, avg=632.42, stdev=149.52 00:35:36.541 lat (usec): min=177, max=1125, avg=661.92, stdev=153.11 00:35:36.541 clat percentiles (usec): 00:35:36.541 | 1.00th=[ 281], 5.00th=[ 388], 10.00th=[ 449], 20.00th=[ 510], 00:35:36.541 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 635], 60.00th=[ 668], 00:35:36.541 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 824], 95.00th=[ 898], 00:35:36.541 | 99.00th=[ 1029], 99.50th=[ 1074], 99.90th=[ 1074], 99.95th=[ 1074], 00:35:36.541 | 99.99th=[ 1074] 00:35:36.541 bw ( KiB/s): min= 4096, max= 4096, per=43.95%, avg=4096.00, stdev= 0.00, samples=1 00:35:36.541 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:36.541 lat (usec) : 250=0.32%, 500=9.74%, 750=35.87%, 1000=40.79% 00:35:36.541 lat (msec) : 2=12.53%, 50=0.75% 00:35:36.541 cpu : usr=1.45%, sys=2.13%, ctx=934, majf=0, minf=1 00:35:36.541 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.541 issued rwts: total=422,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.541 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:36.541 job1: (groupid=0, jobs=1): err= 0: pid=1648163: Wed Nov 20 10:09:07 2024 00:35:36.541 read: IOPS=344, BW=1379KiB/s (1412kB/s)(1380KiB/1001msec) 00:35:36.541 slat (nsec): min=3445, max=45757, avg=19694.55, stdev=9131.53 00:35:36.541 clat (usec): min=545, max=42022, avg=1924.81, stdev=5846.04 00:35:36.541 lat (usec): min=551, max=42046, avg=1944.51, stdev=5846.70 00:35:36.541 clat percentiles (usec): 00:35:36.541 | 1.00th=[ 652], 5.00th=[ 709], 10.00th=[ 750], 20.00th=[ 857], 00:35:36.541 | 30.00th=[ 963], 40.00th=[ 1020], 50.00th=[ 1074], 60.00th=[ 1139], 00:35:36.541 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[ 1270], 95.00th=[ 1319], 00:35:36.541 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:36.541 | 99.99th=[42206] 00:35:36.541 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:35:36.541 slat (nsec): min=6894, max=56252, avg=26504.13, stdev=10041.89 00:35:36.541 clat (usec): min=259, max=2433, avg=606.42, stdev=170.75 00:35:36.541 lat (usec): min=279, max=2473, avg=632.92, stdev=173.66 00:35:36.541 clat percentiles (usec): 00:35:36.541 | 1.00th=[ 293], 5.00th=[ 383], 10.00th=[ 416], 20.00th=[ 482], 00:35:36.541 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 611], 60.00th=[ 635], 00:35:36.541 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 775], 95.00th=[ 832], 00:35:36.541 | 99.00th=[ 955], 99.50th=[ 1647], 99.90th=[ 2442], 99.95th=[ 2442], 00:35:36.541 | 99.99th=[ 2442] 00:35:36.541 bw ( KiB/s): min= 4096, max= 4096, per=43.95%, avg=4096.00, stdev= 0.00, samples=1 00:35:36.541 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:36.541 lat (usec) : 500=14.12%, 750=41.54%, 1000=18.55% 00:35:36.541 lat (msec) : 2=24.74%, 4=0.12%, 50=0.93% 00:35:36.541 cpu : usr=1.40%, sys=1.90%, ctx=857, majf=0, minf=1 00:35:36.541 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.541 issued rwts: total=345,512,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:35:36.541 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:36.541 job2: (groupid=0, jobs=1): err= 0: pid=1648175: Wed Nov 20 10:09:07 2024 00:35:36.541 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:36.541 slat (nsec): min=5687, max=46434, avg=23899.45, stdev=7143.21 00:35:36.541 clat (usec): min=428, max=2904, avg=966.10, stdev=138.19 00:35:36.541 lat (usec): min=435, max=2911, avg=990.00, stdev=141.71 00:35:36.541 clat percentiles (usec): 00:35:36.541 | 1.00th=[ 627], 5.00th=[ 725], 10.00th=[ 807], 20.00th=[ 914], 00:35:36.541 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 988], 60.00th=[ 996], 00:35:36.541 | 70.00th=[ 1012], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1106], 00:35:36.541 | 99.00th=[ 1172], 99.50th=[ 1237], 99.90th=[ 2900], 99.95th=[ 2900], 00:35:36.541 | 99.99th=[ 2900] 00:35:36.541 write: IOPS=870, BW=3481KiB/s (3564kB/s)(3484KiB/1001msec); 0 zone resets 00:35:36.541 slat (nsec): min=5999, max=65994, avg=23729.80, stdev=12453.40 00:35:36.541 clat (usec): min=163, max=929, avg=532.13, stdev=140.93 00:35:36.541 lat (usec): min=171, max=961, avg=555.86, stdev=148.81 00:35:36.541 clat percentiles (usec): 00:35:36.541 | 1.00th=[ 192], 5.00th=[ 273], 10.00th=[ 347], 20.00th=[ 404], 00:35:36.541 | 30.00th=[ 461], 40.00th=[ 515], 50.00th=[ 553], 60.00th=[ 578], 00:35:36.541 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 701], 95.00th=[ 750], 00:35:36.541 | 99.00th=[ 848], 99.50th=[ 881], 99.90th=[ 930], 99.95th=[ 930], 00:35:36.541 | 99.99th=[ 930] 00:35:36.541 bw ( KiB/s): min= 4096, max= 4096, per=43.95%, avg=4096.00, stdev= 0.00, samples=1 00:35:36.541 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:36.541 lat (usec) : 250=2.24%, 500=20.82%, 750=39.19%, 1000=23.50% 00:35:36.541 lat (msec) : 2=14.17%, 4=0.07% 00:35:36.541 cpu : usr=2.90%, sys=3.80%, ctx=1383, majf=0, minf=1 00:35:36.541 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.541 issued rwts: total=512,871,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.541 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:36.541 job3: (groupid=0, jobs=1): err= 0: pid=1648181: Wed Nov 20 10:09:07 2024 00:35:36.541 read: IOPS=240, BW=960KiB/s (983kB/s)(988KiB/1029msec) 00:35:36.541 slat (nsec): min=3961, max=47934, avg=14731.39, stdev=10094.06 00:35:36.541 clat (usec): min=432, max=42158, avg=3087.08, stdev=9042.00 00:35:36.541 lat (usec): min=439, max=42185, avg=3101.81, stdev=9044.99 00:35:36.541 clat percentiles (usec): 00:35:36.541 | 1.00th=[ 529], 5.00th=[ 668], 10.00th=[ 709], 20.00th=[ 766], 00:35:36.541 | 30.00th=[ 824], 40.00th=[ 873], 50.00th=[ 930], 60.00th=[ 979], 00:35:36.541 | 70.00th=[ 1074], 80.00th=[ 1172], 90.00th=[ 1287], 95.00th=[40633], 00:35:36.541 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:35:36.541 | 99.99th=[42206] 00:35:36.541 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:35:36.541 slat (nsec): min=6349, max=53151, avg=14614.13, stdev=10929.36 00:35:36.541 clat (usec): min=183, max=938, avg=492.96, stdev=132.99 00:35:36.541 lat (usec): min=193, max=972, avg=507.57, stdev=138.03 00:35:36.541 clat percentiles (usec): 00:35:36.541 | 1.00th=[ 206], 5.00th=[ 262], 10.00th=[ 306], 20.00th=[ 392], 00:35:36.541 | 30.00th=[ 420], 40.00th=[ 461], 50.00th=[ 494], 
60.00th=[ 519], 00:35:36.541 | 70.00th=[ 553], 80.00th=[ 594], 90.00th=[ 660], 95.00th=[ 725], 00:35:36.541 | 99.00th=[ 848], 99.50th=[ 865], 99.90th=[ 938], 99.95th=[ 938], 00:35:36.541 | 99.99th=[ 938] 00:35:36.541 bw ( KiB/s): min= 4096, max= 4096, per=43.95%, avg=4096.00, stdev= 0.00, samples=1 00:35:36.541 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:36.541 lat (usec) : 250=2.64%, 500=32.54%, 750=35.57%, 1000=17.13% 00:35:36.541 lat (msec) : 2=10.28%, 10=0.13%, 50=1.71% 00:35:36.541 cpu : usr=0.39%, sys=1.26%, ctx=762, majf=0, minf=1 00:35:36.541 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.541 issued rwts: total=247,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.541 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:36.541 00:35:36.541 Run status group 0 (all jobs): 00:35:36.541 READ: bw=5909KiB/s (6051kB/s), 960KiB/s-2046KiB/s (983kB/s-2095kB/s), io=6104KiB (6250kB), run=1001-1033msec 00:35:36.541 WRITE: bw=9320KiB/s (9544kB/s), 1983KiB/s-3481KiB/s (2030kB/s-3564kB/s), io=9628KiB (9859kB), run=1001-1033msec 00:35:36.541 00:35:36.541 Disk stats (read/write): 00:35:36.541 nvme0n1: ios=425/512, merge=0/0, ticks=505/246, in_queue=751, util=82.77% 00:35:36.541 nvme0n2: ios=218/512, merge=0/0, ticks=560/295, in_queue=855, util=86.89% 00:35:36.541 nvme0n3: ios=568/521, merge=0/0, ticks=532/228, in_queue=760, util=90.01% 00:35:36.541 nvme0n4: ios=270/512, merge=0/0, ticks=796/248, in_queue=1044, util=96.18% 00:35:36.541 10:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:35:36.541 [global] 00:35:36.541 thread=1 00:35:36.541 invalidate=1 00:35:36.541 rw=write 00:35:36.541 time_based=1 00:35:36.541 runtime=1 00:35:36.541 ioengine=libaio 00:35:36.541 direct=1 00:35:36.541 bs=4096 00:35:36.541 iodepth=128 00:35:36.541 norandommap=0 00:35:36.541 numjobs=1 00:35:36.541 00:35:36.541 verify_dump=1 00:35:36.541 verify_backlog=512 00:35:36.542 verify_state_save=0 00:35:36.542 do_verify=1 00:35:36.542 verify=crc32c-intel 00:35:36.542 [job0] 00:35:36.542 filename=/dev/nvme0n1 00:35:36.542 [job1] 00:35:36.542 filename=/dev/nvme0n2 00:35:36.542 [job2] 00:35:36.542 filename=/dev/nvme0n3 00:35:36.542 [job3] 00:35:36.542 filename=/dev/nvme0n4 00:35:36.542 Could not set queue depth (nvme0n1) 00:35:36.542 Could not set queue depth (nvme0n2) 00:35:36.542 Could not set queue depth (nvme0n3) 00:35:36.542 Could not set queue depth (nvme0n4) 00:35:37.110 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:37.110 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:37.110 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:37.110 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:37.110 fio-3.35 00:35:37.110 Starting 4 threads 00:35:38.054 00:35:38.054 job0: (groupid=0, jobs=1): err= 0: pid=1648685: Wed Nov 20 10:09:08 2024 00:35:38.054 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:35:38.054 slat (nsec): min=1074, max=18479k, avg=82063.47, stdev=746969.16 00:35:38.054 
clat (usec): min=3476, max=46291, avg=12748.50, stdev=5486.93 00:35:38.054 lat (usec): min=3496, max=46296, avg=12830.57, stdev=5526.34 00:35:38.054 clat percentiles (usec): 00:35:38.054 | 1.00th=[ 4752], 5.00th=[ 5997], 10.00th=[ 6521], 20.00th=[ 7767], 00:35:38.054 | 30.00th=[10421], 40.00th=[11338], 50.00th=[12125], 60.00th=[12649], 00:35:38.054 | 70.00th=[14484], 80.00th=[15664], 90.00th=[20579], 95.00th=[23462], 00:35:38.054 | 99.00th=[29492], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:35:38.054 | 99.99th=[46400] 00:35:38.054 write: IOPS=4154, BW=16.2MiB/s (17.0MB/s)(16.4MiB/1011msec); 0 zone resets 00:35:38.054 slat (nsec): min=1810, max=9920.2k, avg=101763.22, stdev=625381.97 00:35:38.054 clat (usec): min=2199, max=96805, avg=18013.03, stdev=17408.35 00:35:38.054 lat (usec): min=2207, max=96815, avg=18114.80, stdev=17484.42 00:35:38.054 clat percentiles (usec): 00:35:38.054 | 1.00th=[ 3425], 5.00th=[ 4817], 10.00th=[ 5866], 20.00th=[ 6652], 00:35:38.054 | 30.00th=[ 7635], 40.00th=[10683], 50.00th=[11994], 60.00th=[13173], 00:35:38.054 | 70.00th=[17171], 80.00th=[25297], 90.00th=[39584], 95.00th=[57934], 00:35:38.054 | 99.00th=[84411], 99.50th=[87557], 99.90th=[95945], 99.95th=[95945], 00:35:38.054 | 99.99th=[96994] 00:35:38.054 bw ( KiB/s): min=12600, max=20168, per=19.02%, avg=16384.00, stdev=5351.38, samples=2 00:35:38.054 iops : min= 3150, max= 5042, avg=4096.00, stdev=1337.85, samples=2 00:35:38.054 lat (msec) : 4=0.69%, 10=33.59%, 20=48.13%, 50=13.38%, 100=4.21% 00:35:38.054 cpu : usr=3.96%, sys=4.75%, ctx=305, majf=0, minf=1 00:35:38.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:35:38.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:38.054 issued rwts: total=4096,4200,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:38.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:38.054 job1: (groupid=0, jobs=1): err= 0: pid=1648686: Wed Nov 20 10:09:08 2024 00:35:38.054 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:35:38.054 slat (nsec): min=929, max=10886k, avg=81524.97, stdev=552584.50 00:35:38.054 clat (usec): min=3451, max=42891, avg=9326.72, stdev=3646.57 00:35:38.054 lat (usec): min=3459, max=42900, avg=9408.25, stdev=3712.68 00:35:38.054 clat percentiles (usec): 00:35:38.054 | 1.00th=[ 4490], 5.00th=[ 6390], 10.00th=[ 6652], 20.00th=[ 7111], 00:35:38.054 | 30.00th=[ 7504], 40.00th=[ 7898], 50.00th=[ 8717], 60.00th=[ 9241], 00:35:38.054 | 70.00th=[ 9503], 80.00th=[10683], 90.00th=[12256], 95.00th=[15008], 00:35:38.054 | 99.00th=[28443], 99.50th=[32637], 99.90th=[42730], 99.95th=[42730], 00:35:38.054 | 99.99th=[42730] 00:35:38.054 write: IOPS=4421, BW=17.3MiB/s (18.1MB/s)(17.5MiB/1011msec); 0 zone resets 00:35:38.054 slat (nsec): min=1604, max=29781k, avg=143029.66, stdev=952991.86 00:35:38.054 clat (usec): min=1307, max=112777, avg=19400.92, stdev=24168.27 00:35:38.054 lat (usec): min=1317, max=112786, avg=19543.95, stdev=24331.78 00:35:38.054 clat percentiles (msec): 00:35:38.054 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 7], 00:35:38.054 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10], 00:35:38.054 | 70.00th=[ 17], 80.00th=[ 27], 90.00th=[ 53], 95.00th=[ 80], 00:35:38.054 | 99.00th=[ 110], 99.50th=[ 111], 99.90th=[ 113], 99.95th=[ 113], 00:35:38.054 | 99.99th=[ 113] 00:35:38.054 bw ( KiB/s): min=12672, max=22064, per=20.16%, avg=17368.00, stdev=6641.15, samples=2 
00:35:38.054 iops : min= 3168, max= 5516, avg=4342.00, stdev=1660.29, samples=2 00:35:38.054 lat (msec) : 2=0.42%, 4=1.67%, 10=65.25%, 20=19.62%, 50=7.41% 00:35:38.054 lat (msec) : 100=4.00%, 250=1.62% 00:35:38.054 cpu : usr=2.87%, sys=4.55%, ctx=364, majf=0, minf=2 00:35:38.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:35:38.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:38.054 issued rwts: total=4096,4470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:38.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:38.054 job2: (groupid=0, jobs=1): err= 0: pid=1648687: Wed Nov 20 10:09:08 2024 00:35:38.054 read: IOPS=8660, BW=33.8MiB/s (35.5MB/s)(34.0MiB/1005msec) 00:35:38.054 slat (nsec): min=1002, max=12271k, avg=56106.22, stdev=454922.58 00:35:38.054 clat (usec): min=1808, max=20691, avg=7578.26, stdev=2294.53 00:35:38.054 lat (usec): min=2204, max=20698, avg=7634.37, stdev=2309.89 00:35:38.054 clat percentiles (usec): 00:35:38.054 | 1.00th=[ 3195], 5.00th=[ 4948], 10.00th=[ 5538], 20.00th=[ 6063], 00:35:38.054 | 30.00th=[ 6390], 40.00th=[ 6652], 50.00th=[ 7111], 60.00th=[ 7635], 00:35:38.054 | 70.00th=[ 8029], 80.00th=[ 9110], 90.00th=[10421], 95.00th=[11600], 00:35:38.054 | 99.00th=[13698], 99.50th=[20579], 99.90th=[20579], 99.95th=[20579], 00:35:38.054 | 99.99th=[20579] 00:35:38.054 write: IOPS=8959, BW=35.0MiB/s (36.7MB/s)(35.2MiB/1005msec); 0 zone resets 00:35:38.054 slat (nsec): min=1708, max=7580.6k, avg=49397.32, stdev=358665.27 00:35:38.054 clat (usec): min=512, max=41988, avg=6799.78, stdev=4277.73 00:35:38.054 lat (usec): min=1123, max=41997, avg=6849.18, stdev=4298.70 00:35:38.054 clat percentiles (usec): 00:35:38.054 | 1.00th=[ 1696], 5.00th=[ 3654], 10.00th=[ 4047], 20.00th=[ 4948], 00:35:38.054 | 30.00th=[ 5735], 40.00th=[ 6128], 50.00th=[ 6325], 60.00th=[ 6587], 00:35:38.054 | 70.00th=[ 6783], 80.00th=[ 7439], 90.00th=[ 8979], 95.00th=[10290], 00:35:38.054 | 99.00th=[36439], 99.50th=[40633], 99.90th=[41681], 99.95th=[42206], 00:35:38.054 | 99.99th=[42206] 00:35:38.054 bw ( KiB/s): min=33360, max=37648, per=41.22%, avg=35504.00, stdev=3032.07, samples=2 00:35:38.054 iops : min= 8340, max= 9412, avg=8876.00, stdev=758.02, samples=2 00:35:38.054 lat (usec) : 750=0.01% 00:35:38.054 lat (msec) : 2=0.66%, 4=5.20%, 10=85.23%, 20=7.78%, 50=1.12% 00:35:38.054 cpu : usr=6.57%, sys=9.26%, ctx=586, majf=0, minf=2 00:35:38.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:38.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:38.054 issued rwts: total=8704,9004,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:38.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:38.054 job3: (groupid=0, jobs=1): err= 0: pid=1648688: Wed Nov 20 10:09:08 2024 00:35:38.054 read: IOPS=4031, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1005msec) 00:35:38.054 slat (nsec): min=1038, max=14064k, avg=106682.32, stdev=787071.53 00:35:38.054 clat (usec): min=4066, max=53442, avg=13157.57, stdev=6288.50 00:35:38.054 lat (usec): min=4154, max=53450, avg=13264.25, stdev=6358.97 00:35:38.054 clat percentiles (usec): 00:35:38.054 | 1.00th=[ 6456], 5.00th=[ 7046], 10.00th=[ 7963], 20.00th=[ 8848], 00:35:38.054 | 30.00th=[ 9765], 40.00th=[11076], 50.00th=[11994], 60.00th=[12518], 00:35:38.054 | 70.00th=[13042], 
80.00th=[15139], 90.00th=[19792], 95.00th=[27395], 00:35:38.054 | 99.00th=[38536], 99.50th=[46924], 99.90th=[53216], 99.95th=[53216], 00:35:38.054 | 99.99th=[53216] 00:35:38.054 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:35:38.054 slat (nsec): min=1735, max=31769k, avg=132090.01, stdev=867126.92 00:35:38.054 clat (usec): min=1189, max=92281, avg=18030.76, stdev=17144.19 00:35:38.054 lat (usec): min=1238, max=92291, avg=18162.85, stdev=17250.66 00:35:38.054 clat percentiles (usec): 00:35:38.054 | 1.00th=[ 3949], 5.00th=[ 6390], 10.00th=[ 6980], 20.00th=[ 7832], 00:35:38.054 | 30.00th=[ 8291], 40.00th=[ 9110], 50.00th=[11338], 60.00th=[12518], 00:35:38.054 | 70.00th=[17171], 80.00th=[25560], 90.00th=[39060], 95.00th=[55313], 00:35:38.054 | 99.00th=[87557], 99.50th=[89654], 99.90th=[92799], 99.95th=[92799], 00:35:38.054 | 99.99th=[92799] 00:35:38.054 bw ( KiB/s): min=16384, max=16384, per=19.02%, avg=16384.00, stdev= 0.00, samples=2 00:35:38.054 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:35:38.054 lat (msec) : 2=0.02%, 4=0.52%, 10=38.82%, 20=44.21%, 50=12.40% 00:35:38.054 lat (msec) : 100=4.04% 00:35:38.054 cpu : usr=3.29%, sys=4.98%, ctx=310, majf=0, minf=1 00:35:38.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:35:38.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:38.054 issued rwts: total=4052,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:38.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:38.054 00:35:38.054 Run status group 0 (all jobs): 00:35:38.054 READ: bw=80.9MiB/s (84.9MB/s), 15.7MiB/s-33.8MiB/s (16.5MB/s-35.5MB/s), io=81.8MiB (85.8MB), run=1005-1011msec 00:35:38.055 WRITE: bw=84.1MiB/s (88.2MB/s), 15.9MiB/s-35.0MiB/s (16.7MB/s-36.7MB/s), io=85.0MiB (89.2MB), run=1005-1011msec 00:35:38.055 00:35:38.055 Disk stats (read/write): 00:35:38.055 nvme0n1: ios=3614/3671, merge=0/0, ticks=43650/57570, in_queue=101220, util=96.49% 00:35:38.055 nvme0n2: ios=3114/3551, merge=0/0, ticks=20749/60898, in_queue=81647, util=91.64% 00:35:38.055 nvme0n3: ios=7219/7383, merge=0/0, ticks=51412/47241, in_queue=98653, util=95.04% 00:35:38.055 nvme0n4: ios=3636/3647, merge=0/0, ticks=45138/53780, in_queue=98918, util=98.93% 00:35:38.055 10:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:35:38.314 [global] 00:35:38.314 thread=1 00:35:38.314 invalidate=1 00:35:38.314 rw=randwrite 00:35:38.314 time_based=1 00:35:38.314 runtime=1 00:35:38.314 ioengine=libaio 00:35:38.314 direct=1 00:35:38.314 bs=4096 00:35:38.314 iodepth=128 00:35:38.314 norandommap=0 00:35:38.314 numjobs=1 00:35:38.314 00:35:38.314 verify_dump=1 00:35:38.314 verify_backlog=512 00:35:38.314 verify_state_save=0 00:35:38.314 do_verify=1 00:35:38.314 verify=crc32c-intel 00:35:38.314 [job0] 00:35:38.314 filename=/dev/nvme0n1 00:35:38.314 [job1] 00:35:38.314 filename=/dev/nvme0n2 00:35:38.314 [job2] 00:35:38.314 filename=/dev/nvme0n3 00:35:38.314 [job3] 00:35:38.315 filename=/dev/nvme0n4 00:35:38.315 Could not set queue depth (nvme0n1) 00:35:38.315 Could not set queue depth (nvme0n2) 00:35:38.315 Could not set queue depth (nvme0n3) 00:35:38.315 Could not set queue depth (nvme0n4) 00:35:38.574 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:35:38.574 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:38.574 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:38.574 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:38.574 fio-3.35 00:35:38.574 Starting 4 threads 00:35:39.957 00:35:39.957 job0: (groupid=0, jobs=1): err= 0: pid=1649343: Wed Nov 20 10:09:10 2024 00:35:39.957 read: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec) 00:35:39.957 slat (nsec): min=907, max=6453.7k, avg=69438.61, stdev=464618.52 00:35:39.957 clat (usec): min=4874, max=18619, avg=9065.45, stdev=1982.36 00:35:39.957 lat (usec): min=4877, max=18625, avg=9134.89, stdev=2027.51 00:35:39.957 clat percentiles (usec): 00:35:39.957 | 1.00th=[ 5604], 5.00th=[ 6718], 10.00th=[ 7111], 20.00th=[ 7439], 00:35:39.957 | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 8586], 60.00th=[ 9372], 00:35:39.957 | 70.00th=[10159], 80.00th=[10945], 90.00th=[11731], 95.00th=[12911], 00:35:39.957 | 99.00th=[14091], 99.50th=[14222], 99.90th=[18220], 99.95th=[18482], 00:35:39.957 | 99.99th=[18744] 00:35:39.957 write: IOPS=7479, BW=29.2MiB/s (30.6MB/s)(29.4MiB/1005msec); 0 zone resets 00:35:39.957 slat (nsec): min=1499, max=5057.5k, avg=61907.95, stdev=363957.07 00:35:39.957 clat (usec): min=1677, max=17110, avg=8229.04, stdev=2095.63 00:35:39.957 lat (usec): min=4028, max=17113, avg=8290.95, stdev=2127.77 00:35:39.957 clat percentiles (usec): 00:35:39.957 | 1.00th=[ 4621], 5.00th=[ 5604], 10.00th=[ 6652], 20.00th=[ 6980], 00:35:39.957 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7635], 60.00th=[ 8029], 00:35:39.957 | 70.00th=[ 8455], 80.00th=[ 9372], 90.00th=[10814], 95.00th=[13960], 00:35:39.957 | 99.00th=[15008], 99.50th=[15533], 99.90th=[16319], 99.95th=[17171], 00:35:39.957 | 99.99th=[17171] 00:35:39.957 bw ( KiB/s): min=25912, max=33200, per=29.31%, avg=29556.00, stdev=5153.39, samples=2 00:35:39.957 iops : min= 6478, max= 8300, avg=7389.00, stdev=1288.35, samples=2 00:35:39.957 lat (msec) : 2=0.01%, 10=76.60%, 20=23.40% 00:35:39.957 cpu : usr=4.68%, sys=6.97%, ctx=618, majf=0, minf=1 00:35:39.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:39.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:39.957 issued rwts: total=7168,7517,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.957 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:39.957 job1: (groupid=0, jobs=1): err= 0: pid=1649355: Wed Nov 20 10:09:10 2024 00:35:39.957 read: IOPS=4917, BW=19.2MiB/s (20.1MB/s)(19.2MiB/1002msec) 00:35:39.957 slat (nsec): min=949, max=12529k, avg=99612.16, stdev=599357.09 00:35:39.957 clat (usec): min=871, max=58616, avg=12847.02, stdev=8287.84 00:35:39.957 lat (usec): min=3356, max=58620, avg=12946.63, stdev=8325.73 00:35:39.957 clat percentiles (usec): 00:35:39.957 | 1.00th=[ 5932], 5.00th=[ 7635], 10.00th=[ 8225], 20.00th=[ 8717], 00:35:39.957 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10421], 00:35:39.957 | 70.00th=[10683], 80.00th=[15401], 90.00th=[23725], 95.00th=[27657], 00:35:39.957 | 99.00th=[52691], 99.50th=[53216], 99.90th=[58459], 99.95th=[58459], 00:35:39.957 | 99.99th=[58459] 00:35:39.957 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 
00:35:39.957 slat (nsec): min=1576, max=15675k, avg=95384.68, stdev=629234.86 00:35:39.957 clat (usec): min=5586, max=41532, avg=12276.84, stdev=7508.05 00:35:39.957 lat (usec): min=5594, max=41539, avg=12372.22, stdev=7540.63 00:35:39.957 clat percentiles (usec): 00:35:39.957 | 1.00th=[ 6456], 5.00th=[ 7111], 10.00th=[ 7898], 20.00th=[ 8455], 00:35:39.957 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9110], 00:35:39.957 | 70.00th=[ 9765], 80.00th=[16909], 90.00th=[22676], 95.00th=[30278], 00:35:39.957 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:35:39.957 | 99.99th=[41681] 00:35:39.957 bw ( KiB/s): min=12288, max=28672, per=20.31%, avg=20480.00, stdev=11585.24, samples=2 00:35:39.957 iops : min= 3072, max= 7168, avg=5120.00, stdev=2896.31, samples=2 00:35:39.957 lat (usec) : 1000=0.01% 00:35:39.957 lat (msec) : 4=0.32%, 10=63.07%, 20=21.95%, 50=13.96%, 100=0.69% 00:35:39.957 cpu : usr=2.30%, sys=3.80%, ctx=441, majf=0, minf=1 00:35:39.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:35:39.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:39.958 issued rwts: total=4927,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:39.958 job2: (groupid=0, jobs=1): err= 0: pid=1649370: Wed Nov 20 10:09:10 2024 00:35:39.958 read: IOPS=8143, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1006msec) 00:35:39.958 slat (nsec): min=1462, max=7154.6k, avg=58665.48, stdev=463698.61 00:35:39.958 clat (usec): min=2747, max=13916, avg=8258.22, stdev=1876.54 00:35:39.958 lat (usec): min=2767, max=18182, avg=8316.88, stdev=1896.18 00:35:39.958 clat percentiles (usec): 00:35:39.958 | 1.00th=[ 4015], 5.00th=[ 5669], 10.00th=[ 6390], 20.00th=[ 6783], 00:35:39.958 | 30.00th=[ 7177], 40.00th=[ 7439], 50.00th=[ 7701], 60.00th=[ 8455], 00:35:39.958 | 70.00th=[ 9241], 80.00th=[10028], 90.00th=[10945], 95.00th=[11863], 00:35:39.958 | 99.00th=[13042], 99.50th=[13042], 99.90th=[13304], 99.95th=[13304], 00:35:39.958 | 99.99th=[13960] 00:35:39.958 write: IOPS=8404, BW=32.8MiB/s (34.4MB/s)(33.0MiB/1006msec); 0 zone resets 00:35:39.958 slat (nsec): min=1690, max=6601.3k, avg=55347.03, stdev=440684.29 00:35:39.958 clat (usec): min=1166, max=13251, avg=7017.28, stdev=1841.71 00:35:39.958 lat (usec): min=1177, max=13259, avg=7072.63, stdev=1850.66 00:35:39.958 clat percentiles (usec): 00:35:39.958 | 1.00th=[ 3654], 5.00th=[ 4490], 10.00th=[ 4686], 20.00th=[ 5014], 00:35:39.958 | 30.00th=[ 5932], 40.00th=[ 6915], 50.00th=[ 7242], 60.00th=[ 7439], 00:35:39.958 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 9765], 95.00th=[10552], 00:35:39.958 | 99.00th=[12125], 99.50th=[12256], 99.90th=[13173], 99.95th=[13173], 00:35:39.958 | 99.99th=[13304] 00:35:39.958 bw ( KiB/s): min=32768, max=33856, per=33.03%, avg=33312.00, stdev=769.33, samples=2 00:35:39.958 iops : min= 8192, max= 8464, avg=8328.00, stdev=192.33, samples=2 00:35:39.958 lat (msec) : 2=0.05%, 4=1.17%, 10=85.26%, 20=13.53% 00:35:39.958 cpu : usr=6.27%, sys=8.86%, ctx=305, majf=0, minf=2 00:35:39.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:39.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:39.958 issued rwts: total=8192,8455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.958 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:35:39.958 job3: (groupid=0, jobs=1): err= 0: pid=1649376: Wed Nov 20 10:09:10 2024 00:35:39.958 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:35:39.958 slat (nsec): min=975, max=8081.1k, avg=112135.30, stdev=727281.47 00:35:39.958 clat (usec): min=3909, max=33051, avg=15671.52, stdev=6355.61 00:35:39.958 lat (usec): min=3915, max=36128, avg=15783.65, stdev=6392.58 00:35:39.958 clat percentiles (usec): 00:35:39.958 | 1.00th=[ 6521], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[10421], 00:35:39.958 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12911], 60.00th=[15139], 00:35:39.958 | 70.00th=[20317], 80.00th=[23200], 90.00th=[25297], 95.00th=[26346], 00:35:39.958 | 99.00th=[28705], 99.50th=[30540], 99.90th=[33162], 99.95th=[33162], 00:35:39.958 | 99.99th=[33162] 00:35:39.958 write: IOPS=4250, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1005msec); 0 zone resets 00:35:39.958 slat (nsec): min=1623, max=5953.4k, avg=100782.81, stdev=496175.65 00:35:39.958 clat (usec): min=1698, max=35267, avg=14576.55, stdev=6792.93 00:35:39.958 lat (usec): min=3302, max=35269, avg=14677.33, stdev=6839.75 00:35:39.958 clat percentiles (usec): 00:35:39.958 | 1.00th=[ 5669], 5.00th=[ 6521], 10.00th=[ 7570], 20.00th=[ 8979], 00:35:39.958 | 30.00th=[10028], 40.00th=[10945], 50.00th=[12387], 60.00th=[14353], 00:35:39.958 | 70.00th=[16909], 80.00th=[21365], 90.00th=[25297], 95.00th=[28181], 00:35:39.958 | 99.00th=[32113], 99.50th=[32375], 99.90th=[35390], 99.95th=[35390], 00:35:39.958 | 99.99th=[35390] 00:35:39.958 bw ( KiB/s): min=10168, max=22984, per=16.44%, avg=16576.00, stdev=9062.28, samples=2 00:35:39.958 iops : min= 2542, max= 5746, avg=4144.00, stdev=2265.57, samples=2 00:35:39.958 lat (msec) : 2=0.01%, 4=0.17%, 10=23.95%, 20=49.39%, 50=26.48% 00:35:39.958 cpu : usr=3.19%, sys=4.58%, ctx=390, majf=0, minf=2 00:35:39.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:35:39.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:39.958 issued rwts: total=4096,4272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:39.958 00:35:39.958 Run status group 0 (all jobs): 00:35:39.958 READ: bw=94.7MiB/s (99.3MB/s), 15.9MiB/s-31.8MiB/s (16.7MB/s-33.4MB/s), io=95.2MiB (99.9MB), run=1002-1006msec 00:35:39.958 WRITE: bw=98.5MiB/s (103MB/s), 16.6MiB/s-32.8MiB/s (17.4MB/s-34.4MB/s), io=99.1MiB (104MB), run=1002-1006msec 00:35:39.958 00:35:39.958 Disk stats (read/write): 00:35:39.958 nvme0n1: ios=6194/6151, merge=0/0, ticks=26288/23176, in_queue=49464, util=86.97% 00:35:39.958 nvme0n2: ios=3668/4096, merge=0/0, ticks=12872/12754, in_queue=25626, util=87.92% 00:35:39.958 nvme0n3: ios=6715/7015, merge=0/0, ticks=53759/46389, in_queue=100148, util=91.42% 00:35:39.958 nvme0n4: ios=3641/3903, merge=0/0, ticks=27780/26884, in_queue=54664, util=93.57% 00:35:39.958 10:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:35:39.958 10:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1649887 00:35:39.958 10:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:35:39.958 10:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t 
read -r 10 00:35:39.958 [global] 00:35:39.958 thread=1 00:35:39.958 invalidate=1 00:35:39.958 rw=read 00:35:39.958 time_based=1 00:35:39.958 runtime=10 00:35:39.958 ioengine=libaio 00:35:39.958 direct=1 00:35:39.958 bs=4096 00:35:39.958 iodepth=1 00:35:39.958 norandommap=1 00:35:39.958 numjobs=1 00:35:39.958 00:35:39.958 [job0] 00:35:39.958 filename=/dev/nvme0n1 00:35:39.958 [job1] 00:35:39.958 filename=/dev/nvme0n2 00:35:39.958 [job2] 00:35:39.958 filename=/dev/nvme0n3 00:35:39.958 [job3] 00:35:39.958 filename=/dev/nvme0n4 00:35:39.958 Could not set queue depth (nvme0n1) 00:35:39.958 Could not set queue depth (nvme0n2) 00:35:39.958 Could not set queue depth (nvme0n3) 00:35:39.958 Could not set queue depth (nvme0n4) 00:35:40.217 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:40.217 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:40.217 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:40.217 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:40.217 fio-3.35 00:35:40.217 Starting 4 threads 00:35:42.758 10:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:35:43.019 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=7708672, buflen=4096 00:35:43.019 fio: pid=1650150, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:43.019 10:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:35:43.280 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=6107136, buflen=4096 00:35:43.280 fio: pid=1650149, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:43.280 10:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:43.280 10:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:35:43.540 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11493376, buflen=4096 00:35:43.540 fio: pid=1650128, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:43.540 10:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:43.540 10:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:35:43.540 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=8830976, buflen=4096 00:35:43.540 fio: pid=1650144, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:43.540 10:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:43.540 10:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:35:43.799 00:35:43.799 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1650128: Wed Nov 20 10:09:14 2024 00:35:43.799 read: IOPS=933, BW=3734KiB/s (3823kB/s)(11.0MiB/3006msec) 00:35:43.799 slat (usec): min=7, max=34507, avg=56.64, stdev=853.07 00:35:43.799 clat (usec): min=526, max=1306, avg=1000.36, stdev=79.24 00:35:43.799 lat (usec): min=535, max=35559, avg=1057.02, stdev=858.93 00:35:43.799 clat percentiles (usec): 00:35:43.799 | 1.00th=[ 775], 5.00th=[ 865], 10.00th=[ 906], 20.00th=[ 947], 00:35:43.799 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1020], 00:35:43.799 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:35:43.799 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1254], 00:35:43.799 | 99.99th=[ 1303] 00:35:43.800 bw ( KiB/s): min= 3800, max= 3928, per=37.05%, avg=3876.80, stdev=63.40, samples=5 00:35:43.800 iops : min= 950, max= 982, avg=969.20, stdev=15.85, samples=5 00:35:43.800 lat (usec) : 750=0.53%, 1000=48.06% 00:35:43.800 lat (msec) : 2=51.37% 00:35:43.800 cpu : usr=1.60%, sys=3.79%, ctx=2812, majf=0, minf=1 00:35:43.800 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:43.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.800 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.800 issued rwts: total=2807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.800 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:43.800 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1650144: Wed Nov 20 10:09:14 2024 00:35:43.800 read: IOPS=676, BW=2706KiB/s (2771kB/s)(8624KiB/3187msec) 00:35:43.800 slat (usec): min=7, max=21466, avg=67.81, stdev=812.51 00:35:43.800 clat (usec): min=472, max=42093, avg=1396.59, stdev=3667.43 00:35:43.800 lat (usec): min=498, max=42118, avg=1464.42, stdev=3753.16 00:35:43.800 clat percentiles (usec): 00:35:43.800 | 1.00th=[ 660], 5.00th=[ 824], 10.00th=[ 906], 20.00th=[ 979], 00:35:43.800 | 30.00th=[ 1020], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1106], 00:35:43.800 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1221], 00:35:43.800 | 99.00th=[ 1532], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:35:43.800 | 99.99th=[42206] 00:35:43.800 bw ( KiB/s): min= 712, max= 3648, per=26.43%, avg=2765.83, stdev=1221.84, samples=6 00:35:43.800 iops : min= 178, max= 912, avg=691.33, stdev=305.41, samples=6 00:35:43.800 lat (usec) : 500=0.05%, 750=2.36%, 1000=23.18% 00:35:43.800 lat (msec) : 2=73.44%, 10=0.09%, 50=0.83% 00:35:43.800 cpu : usr=0.85%, sys=1.98%, ctx=2163, majf=0, minf=2 00:35:43.800 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:43.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.800 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.800 issued rwts: total=2157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.800 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:43.800 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1650149: Wed Nov 20 10:09:14 2024 00:35:43.800 read: IOPS=529, BW=2116KiB/s (2167kB/s)(5964KiB/2818msec) 00:35:43.800 slat (usec): min=7, max=19488, avg=48.25, stdev=648.97 00:35:43.800 clat 
(usec): min=749, max=42184, avg=1821.10, stdev=4862.05 00:35:43.800 lat (usec): min=775, max=42210, avg=1869.36, stdev=4902.11 00:35:43.800 clat percentiles (usec): 00:35:43.800 | 1.00th=[ 930], 5.00th=[ 1057], 10.00th=[ 1106], 20.00th=[ 1156], 00:35:43.800 | 30.00th=[ 1188], 40.00th=[ 1205], 50.00th=[ 1237], 60.00th=[ 1254], 00:35:43.800 | 70.00th=[ 1287], 80.00th=[ 1303], 90.00th=[ 1352], 95.00th=[ 1401], 00:35:43.800 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:35:43.800 | 99.99th=[42206] 00:35:43.800 bw ( KiB/s): min= 1176, max= 2768, per=19.58%, avg=2048.40, stdev=625.07, samples=5 00:35:43.800 iops : min= 294, max= 692, avg=512.00, stdev=156.17, samples=5 00:35:43.800 lat (usec) : 750=0.07%, 1000=2.48% 00:35:43.800 lat (msec) : 2=95.91%, 50=1.47% 00:35:43.800 cpu : usr=0.60%, sys=1.42%, ctx=1494, majf=0, minf=1 00:35:43.800 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:43.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.800 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.800 issued rwts: total=1492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.800 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:43.800 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1650150: Wed Nov 20 10:09:14 2024 00:35:43.800 read: IOPS=714, BW=2858KiB/s (2927kB/s)(7528KiB/2634msec) 00:35:43.800 slat (nsec): min=7793, max=64982, avg=26233.75, stdev=3371.10 00:35:43.800 clat (usec): min=573, max=42163, avg=1352.95, stdev=2932.74 00:35:43.800 lat (usec): min=600, max=42189, avg=1379.19, stdev=2932.75 00:35:43.800 clat percentiles (usec): 00:35:43.800 | 1.00th=[ 799], 5.00th=[ 906], 10.00th=[ 979], 20.00th=[ 1037], 00:35:43.800 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1139], 60.00th=[ 1188], 00:35:43.800 | 70.00th=[ 1221], 80.00th=[ 1254], 90.00th=[ 1303], 95.00th=[ 1336], 00:35:43.800 | 99.00th=[ 1418], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:35:43.800 | 99.99th=[42206] 00:35:43.800 bw ( KiB/s): min= 2144, max= 3600, per=27.33%, avg=2859.80, stdev=608.47, samples=5 00:35:43.800 iops : min= 536, max= 900, avg=714.80, stdev=152.26, samples=5 00:35:43.800 lat (usec) : 750=0.37%, 1000=12.85% 00:35:43.800 lat (msec) : 2=86.19%, 50=0.53% 00:35:43.800 cpu : usr=1.03%, sys=1.94%, ctx=1883, majf=0, minf=2 00:35:43.800 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:43.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.800 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.800 issued rwts: total=1883,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.800 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:43.800 00:35:43.800 Run status group 0 (all jobs): 00:35:43.800 READ: bw=10.2MiB/s (10.7MB/s), 2116KiB/s-3734KiB/s (2167kB/s-3823kB/s), io=32.6MiB (34.1MB), run=2634-3187msec 00:35:43.800 00:35:43.800 Disk stats (read/write): 00:35:43.800 nvme0n1: ios=2695/0, merge=0/0, ticks=2534/0, in_queue=2534, util=93.16% 00:35:43.800 nvme0n2: ios=2139/0, merge=0/0, ticks=2848/0, in_queue=2848, util=92.93% 00:35:43.800 nvme0n3: ios=1348/0, merge=0/0, ticks=2502/0, in_queue=2502, util=96.07% 00:35:43.800 nvme0n4: ios=1862/0, merge=0/0, ticks=2468/0, in_queue=2468, util=96.42% 00:35:43.800 10:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:35:43.800 10:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:35:44.060 10:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:44.060 10:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:35:44.320 10:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:44.320 10:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:35:44.320 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:44.320 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:35:44.613 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:35:44.613 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1649887 00:35:44.613 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:35:44.613 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:44.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:44.613 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:44.613 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:35:44.613 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:44.613 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:44.613 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:44.613 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:44.613 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:35:44.613 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:35:44.613 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:35:44.613 nvmf hotplug test: fio failed as expected 00:35:44.613 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:44.872 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:35:44.872 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:35:44.872 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:35:44.872 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:35:44.872 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:35:44.872 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:44.872 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:35:44.872 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:44.872 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:35:44.872 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:44.872 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:44.872 rmmod nvme_tcp 00:35:44.872 rmmod nvme_fabrics 00:35:44.872 rmmod nvme_keyring 00:35:44.872 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:44.872 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:35:44.872 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:35:44.872 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1646181 ']' 00:35:44.872 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1646181 00:35:44.873 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1646181 ']' 00:35:44.873 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1646181 00:35:44.873 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:35:44.873 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:44.873 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1646181 00:35:45.133 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:45.133 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:45.133 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1646181' 00:35:45.133 killing process with pid 1646181 00:35:45.133 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1646181 00:35:45.133 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1646181 00:35:45.133 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
00:35:45.133 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:45.133 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:45.133 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:35:45.133 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:35:45.133 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:45.133 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:45.133 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:45.133 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:45.133 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:45.133 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:45.133 10:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:47.674 10:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:47.674 00:35:47.674 real 0m28.216s 00:35:47.674 user 2m15.284s 00:35:47.674 sys 0m12.281s 00:35:47.674 10:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:47.674 10:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:47.674 ************************************ 00:35:47.674 END TEST nvmf_fio_target 00:35:47.674 ************************************ 00:35:47.674 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:47.674 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:47.674 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:47.674 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:47.674 ************************************ 00:35:47.674 START TEST nvmf_bdevio 00:35:47.674 ************************************ 00:35:47.674 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:47.674 * Looking for test storage... 
00:35:47.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:47.674 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:47.674 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:35:47.674 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:47.674 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:47.674 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:47.674 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:47.674 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:47.674 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:47.674 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:47.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.675 --rc genhtml_branch_coverage=1 00:35:47.675 --rc genhtml_function_coverage=1 00:35:47.675 --rc genhtml_legend=1 00:35:47.675 --rc geninfo_all_blocks=1 00:35:47.675 --rc geninfo_unexecuted_blocks=1 00:35:47.675 00:35:47.675 ' 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:47.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.675 --rc genhtml_branch_coverage=1 00:35:47.675 --rc genhtml_function_coverage=1 00:35:47.675 --rc genhtml_legend=1 00:35:47.675 --rc geninfo_all_blocks=1 00:35:47.675 --rc geninfo_unexecuted_blocks=1 00:35:47.675 00:35:47.675 ' 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:47.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.675 --rc genhtml_branch_coverage=1 00:35:47.675 --rc genhtml_function_coverage=1 00:35:47.675 --rc genhtml_legend=1 00:35:47.675 --rc geninfo_all_blocks=1 00:35:47.675 --rc geninfo_unexecuted_blocks=1 00:35:47.675 00:35:47.675 ' 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:47.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.675 --rc genhtml_branch_coverage=1 00:35:47.675 --rc genhtml_function_coverage=1 00:35:47.675 --rc genhtml_legend=1 00:35:47.675 --rc geninfo_all_blocks=1 00:35:47.675 --rc geninfo_unexecuted_blocks=1 00:35:47.675 00:35:47.675 ' 00:35:47.675 10:09:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:47.675 10:09:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:47.675 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:47.676 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:47.676 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:47.676 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:47.676 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:47.676 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:47.676 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:47.676 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:47.676 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:35:47.676 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:55.813 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:55.813 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:55.813 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:55.813 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:55.813 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:55.813 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:55.813 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:55.813 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:55.813 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:55.814 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:55.814 10:09:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:55.814 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:55.814 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:55.814 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:55.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:55.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:35:55.814 00:35:55.814 --- 10.0.0.2 ping statistics --- 00:35:55.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:55.814 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:55.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:55.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:35:55.814 00:35:55.814 --- 10.0.0.1 ping statistics --- 00:35:55.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:55.814 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:55.814 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:35:55.815 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:55.815 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:55.815 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:55.815 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:55.815 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:55.815 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:55.815 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:55.815 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:55.815 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:55.815 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:55.815 10:09:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:55.815 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1655214 00:35:55.815 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1655214 00:35:55.815 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:55.815 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1655214 ']' 00:35:55.815 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:55.815 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:55.815 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:55.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:55.815 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:55.815 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:55.815 [2024-11-20 10:09:25.858994] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:55.815 [2024-11-20 10:09:25.860138] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:35:55.815 [2024-11-20 10:09:25.860198] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:55.815 [2024-11-20 10:09:25.958319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:55.815 [2024-11-20 10:09:26.010117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:55.815 [2024-11-20 10:09:26.010175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:55.815 [2024-11-20 10:09:26.010184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:55.815 [2024-11-20 10:09:26.010191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:55.815 [2024-11-20 10:09:26.010198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:55.815 [2024-11-20 10:09:26.012563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:55.815 [2024-11-20 10:09:26.012725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:55.815 [2024-11-20 10:09:26.012884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:55.815 [2024-11-20 10:09:26.012885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:55.815 [2024-11-20 10:09:26.089723] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
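The nvmf_tcp_init sequence traced above reduces to a handful of iproute2 and iptables commands: move the target-side port into its own network namespace, address both ends, open TCP/4420, and ping in both directions. A minimal standalone sketch of that wiring, reusing the cvl_0_0/cvl_0_1 names and 10.0.0.0/24 addressing from this log:

  # Target NIC gets its own namespace; the initiator NIC stays in the default one.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Address both ends of the link.
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  # Bring the interfaces (and the namespace loopback) up.
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP listener port, then verify reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Running the target behind a namespace this way lets a single two-port host act as both initiator and target without the kernel short-circuiting the connection over loopback.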
00:35:55.815 [2024-11-20 10:09:26.090704] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:55.815 [2024-11-20 10:09:26.090923] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:55.815 [2024-11-20 10:09:26.091371] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:55.815 [2024-11-20 10:09:26.091423] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:55.815 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:55.815 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:35:55.815 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:55.815 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:55.815 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:55.815 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:55.815 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:55.815 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.815 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:55.815 [2024-11-20 10:09:26.717869] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:56.075 Malloc0 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.075 10:09:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:56.075 [2024-11-20 10:09:26.810148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:56.075 { 00:35:56.075 "params": { 00:35:56.075 "name": "Nvme$subsystem", 00:35:56.075 "trtype": "$TEST_TRANSPORT", 00:35:56.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:56.075 "adrfam": "ipv4", 00:35:56.075 "trsvcid": "$NVMF_PORT", 00:35:56.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:56.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:56.075 "hdgst": ${hdgst:-false}, 00:35:56.075 "ddgst": ${ddgst:-false} 00:35:56.075 }, 00:35:56.075 "method": "bdev_nvme_attach_controller" 00:35:56.075 } 00:35:56.075 EOF 00:35:56.075 )") 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:56.075 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:56.075 "params": { 00:35:56.075 "name": "Nvme1", 00:35:56.075 "trtype": "tcp", 00:35:56.075 "traddr": "10.0.0.2", 00:35:56.075 "adrfam": "ipv4", 00:35:56.075 "trsvcid": "4420", 00:35:56.075 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:56.075 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:56.075 "hdgst": false, 00:35:56.075 "ddgst": false 00:35:56.075 }, 00:35:56.075 "method": "bdev_nvme_attach_controller" 00:35:56.075 }' 00:35:56.075 [2024-11-20 10:09:26.880420] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:35:56.075 [2024-11-20 10:09:26.880486] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1655525 ] 00:35:56.075 [2024-11-20 10:09:26.977043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:56.335 [2024-11-20 10:09:27.033518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:56.335 [2024-11-20 10:09:27.033682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:56.335 [2024-11-20 10:09:27.033682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:56.335 I/O targets: 00:35:56.335 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:56.335 00:35:56.335 00:35:56.335 CUnit - A unit testing framework for C - Version 2.1-3 00:35:56.335 http://cunit.sourceforge.net/ 00:35:56.335 00:35:56.335 00:35:56.335 Suite: bdevio tests on: Nvme1n1 00:35:56.335 Test: blockdev write read block ...passed 00:35:56.595 Test: blockdev write zeroes read block ...passed 00:35:56.595 Test: blockdev write zeroes read no split ...passed 00:35:56.595 Test: blockdev write zeroes read split ...passed 00:35:56.595 Test: blockdev write zeroes read split partial ...passed 00:35:56.595 Test: blockdev reset ...[2024-11-20 10:09:27.405208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:56.595 [2024-11-20 10:09:27.405307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2345970 (9): Bad file descriptor 00:35:56.595 [2024-11-20 10:09:27.417914] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
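Two pieces drive the bdevio run whose output follows: an RPC sequence that provisions the target, and a JSON config handed to bdevio via process substitution (the /dev/fd/62 above). rpc_cmd is the harness's wrapper around SPDK's JSON-RPC socket, so the same provisioning can be sketched with the stock scripts/rpc.py client, using the same method names and flags that appear in the xtrace output:

  # Provision the target: TCP transport, a 64 MiB malloc bdev with 512-byte
  # blocks, and a subsystem exposing it on 10.0.0.2:4420.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

On the initiator side, gen_nvmf_target_json wraps bdev_nvme_attach_controller entries like the one printed above (Nvme1, trtype tcp, traddr 10.0.0.2, trsvcid 4420) into a bdev-subsystem config, which bdevio --json consumes to attach the remote namespace before exercising it.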
00:35:56.595 passed 00:35:56.595 Test: blockdev write read 8 blocks ...passed 00:35:56.595 Test: blockdev write read size > 128k ...passed 00:35:56.595 Test: blockdev write read invalid size ...passed 00:35:56.595 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:56.595 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:56.595 Test: blockdev write read max offset ...passed 00:35:56.856 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:56.856 Test: blockdev writev readv 8 blocks ...passed 00:35:56.856 Test: blockdev writev readv 30 x 1block ...passed 00:35:56.856 Test: blockdev writev readv block ...passed 00:35:56.856 Test: blockdev writev readv size > 128k ...passed 00:35:56.856 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:56.856 Test: blockdev comparev and writev ...[2024-11-20 10:09:27.685764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:56.856 [2024-11-20 10:09:27.685812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:56.856 [2024-11-20 10:09:27.685828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:56.856 [2024-11-20 10:09:27.685837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.856 [2024-11-20 10:09:27.686496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:56.856 [2024-11-20 10:09:27.686511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:56.856 [2024-11-20 10:09:27.686526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:56.856 [2024-11-20 10:09:27.686534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:56.856 [2024-11-20 10:09:27.687150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:56.856 [2024-11-20 10:09:27.687168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:56.856 [2024-11-20 10:09:27.687183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:56.856 [2024-11-20 10:09:27.687192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:56.856 [2024-11-20 10:09:27.687802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:56.856 [2024-11-20 10:09:27.687816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:56.856 [2024-11-20 10:09:27.687830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:56.856 [2024-11-20 10:09:27.687838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:56.856 passed 00:35:57.117 Test: blockdev nvme passthru rw ...passed 00:35:57.117 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:09:27.771846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:57.117 [2024-11-20 10:09:27.771863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:57.117 [2024-11-20 10:09:27.772248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:57.117 [2024-11-20 10:09:27.772262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:57.117 [2024-11-20 10:09:27.772645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:57.117 [2024-11-20 10:09:27.772658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:57.117 [2024-11-20 10:09:27.773033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:57.118 [2024-11-20 10:09:27.773046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:57.118 passed 00:35:57.118 Test: blockdev nvme admin passthru ...passed 00:35:57.118 Test: blockdev copy ...passed 00:35:57.118 00:35:57.118 Run Summary: Type Total Ran Passed Failed Inactive 00:35:57.118 suites 1 1 n/a 0 0 00:35:57.118 tests 23 23 23 0 0 00:35:57.118 asserts 152 152 152 0 n/a 00:35:57.118 00:35:57.118 Elapsed time = 1.275 seconds 00:35:57.118 10:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:57.118 10:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.118 10:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:57.118 10:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.118 10:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:57.118 10:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:57.118 10:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:57.118 10:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:57.118 10:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:57.118 10:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:57.118 10:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:57.118 10:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:57.118 rmmod nvme_tcp 00:35:57.118 rmmod nvme_fabrics 00:35:57.118 rmmod nvme_keyring 00:35:57.118 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
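The teardown that nvmftestfini drives above and below mirrors the setup in reverse: drop the subsystem, unload the kernel initiator modules (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring), stop the target app, strip the SPDK-tagged firewall rules, and remove the namespace. A condensed sketch, with $nvmfpid standing in for the PID the harness tracks:

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-tcp       # also drags out nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"    # killprocess in the harness; signal details differ
  # iptr: restore the firewall minus the SPDK_NVMF-tagged comment rules.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # remove_spdk_ns plus an address flush, as traced at the end of this test.
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1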
00:35:57.379 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:57.379 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:57.379 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1655214 ']' 00:35:57.379 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1655214 00:35:57.379 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1655214 ']' 00:35:57.379 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1655214 00:35:57.379 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:35:57.379 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:57.379 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1655214 00:35:57.379 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:35:57.379 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:35:57.379 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1655214' 00:35:57.379 killing process with pid 1655214 00:35:57.379 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1655214 00:35:57.379 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1655214 00:35:57.379 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:57.380 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:57.380 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:57.380 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:35:57.380 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:57.380 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:57.380 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:57.380 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:57.380 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:57.380 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:57.380 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:57.380 10:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:59.925 10:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:59.925 00:35:59.925 real 0m12.298s 00:35:59.925 user 
0m9.810s 00:35:59.925 sys 0m6.452s 00:35:59.925 10:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:59.925 10:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:59.925 ************************************ 00:35:59.925 END TEST nvmf_bdevio 00:35:59.925 ************************************ 00:35:59.925 10:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:59.925 00:35:59.925 real 5m0.450s 00:35:59.925 user 10m20.651s 00:35:59.925 sys 2m5.337s 00:35:59.925 10:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:59.925 10:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:59.925 ************************************ 00:35:59.925 END TEST nvmf_target_core_interrupt_mode 00:35:59.925 ************************************ 00:35:59.925 10:09:30 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:59.925 10:09:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:59.925 10:09:30 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:59.925 10:09:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:59.925 ************************************ 00:35:59.925 START TEST nvmf_interrupt 00:35:59.925 ************************************ 00:35:59.925 10:09:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:59.925 * Looking for test storage... 
00:35:59.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:59.925 10:09:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:59.925 10:09:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:59.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.926 --rc genhtml_branch_coverage=1 00:35:59.926 --rc genhtml_function_coverage=1 00:35:59.926 --rc genhtml_legend=1 00:35:59.926 --rc geninfo_all_blocks=1 00:35:59.926 --rc geninfo_unexecuted_blocks=1 00:35:59.926 00:35:59.926 ' 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:59.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.926 --rc genhtml_branch_coverage=1 00:35:59.926 --rc genhtml_function_coverage=1 00:35:59.926 --rc genhtml_legend=1 00:35:59.926 --rc geninfo_all_blocks=1 00:35:59.926 --rc geninfo_unexecuted_blocks=1 00:35:59.926 00:35:59.926 ' 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:59.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.926 --rc genhtml_branch_coverage=1 00:35:59.926 --rc genhtml_function_coverage=1 00:35:59.926 --rc genhtml_legend=1 00:35:59.926 --rc geninfo_all_blocks=1 00:35:59.926 --rc geninfo_unexecuted_blocks=1 00:35:59.926 00:35:59.926 ' 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:59.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.926 --rc genhtml_branch_coverage=1 00:35:59.926 --rc genhtml_function_coverage=1 00:35:59.926 --rc genhtml_legend=1 00:35:59.926 --rc geninfo_all_blocks=1 00:35:59.926 --rc geninfo_unexecuted_blocks=1 00:35:59.926 00:35:59.926 ' 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:59.926 10:09:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:08.062 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:08.062 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:36:08.062 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:08.062 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:08.062 10:09:37 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:08.062 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:08.062 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:08.062 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:36:08.062 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:08.062 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:36:08.062 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:36:08.062 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:36:08.062 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:08.063 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.063 10:09:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:08.063 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:08.063 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:08.063 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:08.063 10:09:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:08.063 10:09:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:08.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:08.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:36:08.063 00:36:08.063 --- 10.0.0.2 ping statistics --- 00:36:08.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.063 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:08.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:08.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:36:08.063 00:36:08.063 --- 10.0.0.1 ping statistics --- 00:36:08.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.063 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1659915 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1659915 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1659915 ']' 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:08.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:08.063 10:09:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:08.063 [2024-11-20 10:09:38.330261] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:08.064 [2024-11-20 10:09:38.331377] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:36:08.064 [2024-11-20 10:09:38.331429] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:08.064 [2024-11-20 10:09:38.431502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:08.064 [2024-11-20 10:09:38.483012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
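For the interrupt test the harness restarts nvmf_tgt inside the same namespace, now with -m 0x3 (cores 0-1, matching the two reactor notices below) and --interrupt-mode so the reactors wait on events instead of busy-polling. A sketch of the launch plus a simple stand-in for waitforlisten (the real helper's polling details differ; probing the RPC socket until rpc_get_methods answers is the same idea):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the app is ready to serve RPCs.
  until scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done

Once the target answers, the AIO-backed bdev and subsystem are created, and reactor_is_idle uses top -bHn 1 (as at the tail of this excerpt) to confirm each reactor's CPU usage stays under the 30% idle threshold.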
00:36:08.064 [2024-11-20 10:09:38.483065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:08.064 [2024-11-20 10:09:38.483074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:08.064 [2024-11-20 10:09:38.483082] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:08.064 [2024-11-20 10:09:38.483088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:08.064 [2024-11-20 10:09:38.484859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:08.064 [2024-11-20 10:09:38.484864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:08.064 [2024-11-20 10:09:38.561267] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:08.064 [2024-11-20 10:09:38.561736] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:08.064 [2024-11-20 10:09:38.562077] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:08.324 10:09:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:08.324 10:09:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:36:08.324 10:09:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:08.324 10:09:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:08.324 10:09:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:08.324 10:09:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:08.324 10:09:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:36:08.324 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:36:08.324 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:36:08.324 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:36:08.324 5000+0 records in 00:36:08.324 5000+0 records out 00:36:08.324 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0181318 s, 565 MB/s 00:36:08.324 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:36:08.324 10:09:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.324 10:09:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:08.585 AIO0 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:08.585 [2024-11-20 10:09:39.269918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.585 10:09:39 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:08.585 [2024-11-20 10:09:39.314355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1659915 0 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1659915 0 idle 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1659915 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1659915 -w 256 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:08.585 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1659915 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.31 reactor_0' 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1659915 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.31 reactor_0 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1659915 1 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1659915 1 idle 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1659915 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1659915 -w 256 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1659920 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1659920 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1660242 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
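The spdk_nvme_perf invocation just traced is what drives the interrupt-mode target busy. Restating the same command line from the trace flag by flag (no new options, only comments added; the binary path is the workspace build shown above):

    # The perf workload launched by target/interrupt.sh@31:
    #   -q 256    queue depth of 256 outstanding I/Os per connection
    #   -o 4096   4 KiB I/O size
    #   -w randrw random mixed read/write workload
    #   -M 30     30% reads / 70% writes
    #   -t 10     run for 10 seconds
    #   -c 0xC    pin the initiator to cores 2-3, clear of the target's -m 0x3 (cores 0-1)
    #   -r ...    NVMe/TCP transport ID of the listener created on 10.0.0.2:4420
    ./build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The core split matters: the "Associating ... with lcore 2 / lcore 3" lines in the perf output below confirm the initiator never contends with the two target reactors being measured.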
00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1659915 0 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1659915 0 busy 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1659915 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1659915 -w 256 00:36:08.845 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:09.106 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1659915 root 20 0 128.2g 43776 32256 S 6.7 0.0 0:00.33 reactor_0' 00:36:09.106 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1659915 root 20 0 128.2g 43776 32256 S 6.7 0.0 0:00.33 reactor_0 00:36:09.106 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:09.106 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:09.106 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:36:09.106 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:36:09.106 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:09.106 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:09.106 10:09:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:36:10.047 10:09:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:36:10.047 10:09:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:10.047 10:09:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1659915 -w 256 00:36:10.047 10:09:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1659915 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.62 reactor_0' 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1659915 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.62 reactor_0 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1659915 1 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1659915 1 busy 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1659915 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1659915 -w 256 00:36:10.308 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:10.570 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1659920 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:01.33 reactor_1' 00:36:10.570 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1659920 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:01.33 reactor_1 00:36:10.570 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:10.570 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:10.570 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:36:10.570 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:36:10.570 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:10.570 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:10.570 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:10.570 10:09:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:10.570 10:09:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1660242 00:36:20.572 Initializing NVMe Controllers 00:36:20.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:20.572 Controller IO queue size 256, less than required. 00:36:20.572 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:20.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:20.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:20.572 Initialization complete. Launching workers. 
00:36:20.572 ======================================================== 00:36:20.572 Latency(us) 00:36:20.572 Device Information : IOPS MiB/s Average min max 00:36:20.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19005.70 74.24 13474.31 4712.23 32151.01 00:36:20.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19893.50 77.71 12869.91 7488.53 30048.51 00:36:20.573 ======================================================== 00:36:20.573 Total : 38899.20 151.95 13165.21 4712.23 32151.01 00:36:20.573 00:36:20.573 10:09:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:20.573 10:09:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1659915 0 00:36:20.573 10:09:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1659915 0 idle 00:36:20.573 10:09:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1659915 00:36:20.573 10:09:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:20.573 10:09:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:20.573 10:09:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:20.573 10:09:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:20.573 10:09:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:20.573 10:09:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:20.573 10:09:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:20.573 10:09:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:20.573 10:09:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:20.573 10:09:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1659915 -w 256 00:36:20.573 10:09:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1659915 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.30 reactor_0' 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1659915 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.30 reactor_0 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1659915 1 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1659915 1 idle 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1659915 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1659915 -w 256 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1659920 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1659920 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:20.573 10:09:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:20.573 10:09:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:36:20.573 10:09:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:36:20.573 10:09:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:20.573 10:09:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:20.573 10:09:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1659915 0 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1659915 0 idle 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1659915 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1659915 -w 256 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1659915 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.71 reactor_0' 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1659915 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.71 reactor_0 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1659915 1 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1659915 1 idle 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1659915 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
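The nvme connect / waitforserial sequence a few entries back verifies that the kernel initiator actually attached: it polls lsblk until a block device carrying the subsystem serial (SPDKISFASTANDAWESOME, set at nvmf_create_subsystem time) appears. A minimal sketch of that wait loop, matching the bounded-retry shape visible in the trace:

    # Sketch of waitforserial from autotest_common.sh: wait for the namespace
    # to surface as a block device with the expected serial.
    waitforserial() {
      local serial=$1 i=0 nvme_devices
      sleep 2                                   # let the connect settle
      while (( i++ <= 15 )); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices >= 1 )) && return 0     # device is visible; done
        sleep 2
      done
      return 1
    }
    waitforserial SPDKISFASTANDAWESOME

Only after this returns does the test re-check that both reactors have dropped back to idle with the kernel host connected, which is the sequence now continuing below.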
00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1659915 -w 256 00:36:22.489 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:22.750 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1659920 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.16 reactor_1' 00:36:22.750 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1659920 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.16 reactor_1 00:36:22.750 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:22.750 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:22.750 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:22.750 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:22.750 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:22.750 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:22.750 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:22.750 10:09:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:22.750 10:09:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:22.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:22.750 10:09:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:22.750 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:36:22.750 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:22.750 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:23.012 rmmod nvme_tcp 00:36:23.012 rmmod nvme_fabrics 00:36:23.012 rmmod nvme_keyring 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
1659915 ']' 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1659915 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1659915 ']' 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1659915 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1659915 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1659915' 00:36:23.012 killing process with pid 1659915 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1659915 00:36:23.012 10:09:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1659915 00:36:23.273 10:09:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:23.273 10:09:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:23.273 10:09:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:23.273 10:09:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:36:23.273 10:09:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:23.273 10:09:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:36:23.273 10:09:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:36:23.273 10:09:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:23.273 10:09:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:23.273 10:09:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:23.273 10:09:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:23.273 10:09:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:25.185 10:09:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:25.185 00:36:25.185 real 0m25.595s 00:36:25.185 user 0m40.449s 00:36:25.185 sys 0m9.848s 00:36:25.185 10:09:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:25.185 10:09:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:25.185 ************************************ 00:36:25.185 END TEST nvmf_interrupt 00:36:25.185 ************************************ 00:36:25.446 00:36:25.447 real 30m8.943s 00:36:25.447 user 61m45.379s 00:36:25.447 sys 10m16.817s 00:36:25.447 10:09:56 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:25.447 10:09:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:25.447 ************************************ 00:36:25.447 END TEST nvmf_tcp 00:36:25.447 ************************************ 00:36:25.447 10:09:56 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:36:25.447 10:09:56 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:25.447 10:09:56 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:25.447 10:09:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:25.447 10:09:56 -- common/autotest_common.sh@10 -- # set +x 00:36:25.447 ************************************ 00:36:25.447 START TEST spdkcli_nvmf_tcp 00:36:25.447 ************************************ 00:36:25.447 10:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:25.447 * Looking for test storage... 00:36:25.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:25.447 10:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:25.447 10:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:36:25.447 10:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:25.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.709 --rc genhtml_branch_coverage=1 00:36:25.709 --rc genhtml_function_coverage=1 00:36:25.709 --rc genhtml_legend=1 00:36:25.709 --rc geninfo_all_blocks=1 00:36:25.709 --rc geninfo_unexecuted_blocks=1 00:36:25.709 00:36:25.709 ' 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:25.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.709 --rc genhtml_branch_coverage=1 00:36:25.709 --rc genhtml_function_coverage=1 00:36:25.709 --rc genhtml_legend=1 00:36:25.709 --rc geninfo_all_blocks=1 00:36:25.709 --rc geninfo_unexecuted_blocks=1 00:36:25.709 00:36:25.709 ' 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:25.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.709 --rc genhtml_branch_coverage=1 00:36:25.709 --rc genhtml_function_coverage=1 00:36:25.709 --rc genhtml_legend=1 00:36:25.709 --rc geninfo_all_blocks=1 00:36:25.709 --rc geninfo_unexecuted_blocks=1 00:36:25.709 00:36:25.709 ' 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:25.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.709 --rc genhtml_branch_coverage=1 00:36:25.709 --rc genhtml_function_coverage=1 00:36:25.709 --rc genhtml_legend=1 00:36:25.709 --rc geninfo_all_blocks=1 00:36:25.709 --rc geninfo_unexecuted_blocks=1 00:36:25.709 00:36:25.709 ' 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:25.709 
10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:25.709 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:25.710 10:09:56 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:25.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1663472 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1663472 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1663472 ']' 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:25.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:25.710 10:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:25.710 [2024-11-20 10:09:56.521131] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
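For the spdkcli test the start-up pattern repeats: nvmf_tgt is launched (-m 0x3 -p 0) and waitforlisten blocks until the app is ready to serve RPCs on /var/tmp/spdk.sock. The real helper in autotest_common.sh is more involved; the following is only a rough stand-in under the assumption that it polls the process and the RPC UNIX socket:

    # Rough stand-in for waitforlisten (polling details here are assumptions).
    waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
      while (( max_retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1  # target died during start-up
        [[ -S $rpc_addr ]] && return 0          # RPC UNIX socket is listening
        sleep 0.1
      done
      return 1
    }

Once the socket is up, spdkcli_job.py drives the create/check/clear command lists that follow in this trace against that RPC endpoint.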
00:36:25.710 [2024-11-20 10:09:56.521212] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663472 ] 00:36:25.710 [2024-11-20 10:09:56.616548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:25.971 [2024-11-20 10:09:56.670942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:25.971 [2024-11-20 10:09:56.670947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:26.543 10:09:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:26.543 10:09:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:36:26.543 10:09:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:26.543 10:09:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:26.543 10:09:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:26.543 10:09:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:26.543 10:09:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:26.543 10:09:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:26.543 10:09:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:26.543 10:09:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:26.543 10:09:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:26.543 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:26.543 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:26.543 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:26.543 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:26.543 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:26.543 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:26.543 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:26.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:26.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:26.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:26.543 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:26.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:26.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:26.543 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:26.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:26.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:36:26.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:26.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:26.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:26.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:26.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:26.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:26.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:26.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:26.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:26.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:26.543 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:26.543 ' 00:36:29.851 [2024-11-20 10:10:00.061188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:30.524 [2024-11-20 10:10:01.421500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:33.084 [2024-11-20 10:10:03.944509] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:35.628 [2024-11-20 10:10:06.174885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:37.012 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:37.012 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:37.012 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:37.012 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:37.012 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:37.012 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:37.012 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:37.012 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:37.012 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:37.012 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:37.012 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:37.012 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:37.012 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:37.012 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:37.012 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:37.012 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:37.012 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:37.012 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:37.012 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:37.012 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:37.012 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:37.012 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:37.012 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:37.012 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:37.012 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:37.012 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:37.012 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:37.012 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:37.273 10:10:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:37.273 10:10:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:37.273 10:10:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:37.273 10:10:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:37.273 10:10:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:37.273 10:10:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:37.273 10:10:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:37.273 10:10:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:37.534 10:10:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:37.534 10:10:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:37.534 10:10:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:37.534 10:10:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:37.534 10:10:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:37.795 
10:10:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:37.795 10:10:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:37.795 10:10:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:37.795 10:10:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:37.795 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:37.795 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:37.795 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:37.795 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:37.795 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:37.795 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:37.795 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:37.795 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:37.795 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:37.795 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:37.795 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:37.795 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:37.795 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:37.795 ' 00:36:44.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:44.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:44.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:44.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:44.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:44.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:44.383 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:44.384 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:44.384 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:44.384 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:44.384 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:44.384 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:44.384 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:44.384 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:44.384 
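Each 'Executing command' entry in this phase is a triple handed to spdkcli_job.py: the spdkcli command line, a substring to check for in the output, and a flag that, going by the runs above, marks whether the substring should still be present afterwards (True on the creates, False on the deletes). Stripped of the driver, the teardown reduces to feeding the same lines to spdkcli directly; a sketch assuming a live target and the same checkout:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # undo the configuration in reverse order: namespaces, hosts, listeners, subsystems, bdevs
    $SPDK/scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1
    $SPDK/scripts/spdkcli.py /nvmf/subsystem delete_all
    $SPDK/scripts/spdkcli.py /bdevs/malloc delete Malloc1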
10:10:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1663472 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1663472 ']' 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1663472 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1663472 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1663472' 00:36:44.384 killing process with pid 1663472 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1663472 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1663472 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1663472 ']' 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1663472 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1663472 ']' 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1663472 00:36:44.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1663472) - No such process 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1663472 is not found' 00:36:44.384 Process with pid 1663472 is not found 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:44.384 00:36:44.384 real 0m18.145s 00:36:44.384 user 0m40.304s 00:36:44.384 sys 0m0.866s 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:44.384 10:10:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:44.384 ************************************ 00:36:44.384 END TEST spdkcli_nvmf_tcp 00:36:44.384 ************************************ 00:36:44.384 10:10:14 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:44.384 10:10:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:44.384 10:10:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:44.384 10:10:14 -- common/autotest_common.sh@10 -- # set +x 00:36:44.384 ************************************ 00:36:44.384 START TEST nvmf_identify_passthru 00:36:44.384 ************************************ 00:36:44.384 10:10:14 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:44.384 * Looking for test 
storage... 00:36:44.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:44.384 10:10:14 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:44.384 10:10:14 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:36:44.384 10:10:14 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:44.384 10:10:14 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:44.384 10:10:14 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:36:44.384 10:10:14 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:44.384 10:10:14 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:44.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:44.384 --rc genhtml_branch_coverage=1 00:36:44.384 --rc genhtml_function_coverage=1 00:36:44.384 --rc genhtml_legend=1 00:36:44.384 --rc geninfo_all_blocks=1 00:36:44.384 --rc geninfo_unexecuted_blocks=1 00:36:44.384 00:36:44.384 ' 00:36:44.384 10:10:14 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:44.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:44.384 --rc genhtml_branch_coverage=1 00:36:44.384 --rc genhtml_function_coverage=1 00:36:44.384 --rc genhtml_legend=1 00:36:44.384 --rc geninfo_all_blocks=1 00:36:44.384 --rc geninfo_unexecuted_blocks=1 00:36:44.384 00:36:44.384 ' 00:36:44.384 10:10:14 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:44.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:44.384 --rc genhtml_branch_coverage=1 00:36:44.384 --rc genhtml_function_coverage=1 00:36:44.384 --rc genhtml_legend=1 00:36:44.384 --rc geninfo_all_blocks=1 00:36:44.384 --rc geninfo_unexecuted_blocks=1 00:36:44.384 00:36:44.384 ' 00:36:44.384 10:10:14 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:44.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:44.384 --rc genhtml_branch_coverage=1 00:36:44.384 --rc genhtml_function_coverage=1 00:36:44.384 --rc genhtml_legend=1 00:36:44.384 --rc geninfo_all_blocks=1 00:36:44.384 --rc geninfo_unexecuted_blocks=1 00:36:44.384 00:36:44.384 ' 00:36:44.384 10:10:14 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:44.384 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:44.384 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:44.384 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:44.384 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:44.384 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:44.384 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:44.385 10:10:14 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:44.385 10:10:14 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:44.385 10:10:14 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:44.385 10:10:14 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:44.385 10:10:14 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.385 10:10:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.385 10:10:14 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.385 10:10:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:44.385 10:10:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:44.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:44.385 10:10:14 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:44.385 10:10:14 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:44.385 10:10:14 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:44.385 10:10:14 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:44.385 10:10:14 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:44.385 10:10:14 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.385 10:10:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.385 10:10:14 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.385 10:10:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:44.385 10:10:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.385 10:10:14 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:44.385 10:10:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:44.385 10:10:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:44.385 10:10:14 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:36:44.385 10:10:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:36:50.974 10:10:21 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:50.974 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:50.974 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:50.974 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:50.974 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:50.974 10:10:21 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:50.974 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:51.236 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:51.236 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:51.236 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:51.236 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:51.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:51.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:36:51.236 00:36:51.236 --- 10.0.0.2 ping statistics --- 00:36:51.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:51.236 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:36:51.236 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:51.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:51.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:36:51.236 00:36:51.237 --- 10.0.0.1 ping statistics --- 00:36:51.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:51.237 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:36:51.237 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:51.237 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:36:51.237 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:51.237 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:51.237 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:51.237 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:51.237 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:51.237 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:51.237 10:10:21 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:51.237 10:10:22 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:51.237 10:10:22 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:51.237 10:10:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:51.237 10:10:22 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:51.237 10:10:22 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:36:51.237 10:10:22 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:36:51.237 10:10:22 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:36:51.237 10:10:22 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:36:51.237 10:10:22 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:36:51.237 10:10:22 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:36:51.237 10:10:22 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:51.237 10:10:22 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:51.237 10:10:22 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:36:51.499 10:10:22 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:36:51.499 10:10:22 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:36:51.499 10:10:22 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:36:51.499 10:10:22 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:36:51.499 10:10:22 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:36:51.499 10:10:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:51.499 10:10:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:51.499 10:10:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:51.760 10:10:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:36:51.760 10:10:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:51.760 10:10:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:51.760 10:10:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:52.332 10:10:23 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:36:52.332 10:10:23 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:52.332 10:10:23 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:52.332 10:10:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:52.332 10:10:23 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:52.332 10:10:23 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:52.332 10:10:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:52.332 10:10:23 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1670893 00:36:52.332 10:10:23 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:52.332 10:10:23 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:52.332 10:10:23 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1670893 00:36:52.332 10:10:23 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1670893 ']' 00:36:52.332 10:10:23 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:52.332 10:10:23 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:52.332 10:10:23 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:52.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:52.332 10:10:23 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:52.332 10:10:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:52.593 [2024-11-20 10:10:23.277618] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:36:52.593 [2024-11-20 10:10:23.277672] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:52.593 [2024-11-20 10:10:23.372056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:52.593 [2024-11-20 10:10:23.409453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:52.593 [2024-11-20 10:10:23.409487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
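The SPDK banner and DPDK EAL parameter dump above follow from the launch line a few entries earlier: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace built during nvmftestinit (cvl_0_0 moved in and addressed 10.0.0.2/24, the host keeping cvl_0_1 at 10.0.0.1/24, TCP port 4420 opened in iptables), -m 0xF yields the four reactor notices that follow, -e 0xFFFF enables every tracepoint group, and --wait-for-rpc holds subsystem initialization until a framework_start_init RPC arrives. Condensed:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # four cores (0xF), all trace groups (0xFFFF), init deferred until the start RPC
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc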
00:36:52.593 [2024-11-20 10:10:23.409495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:52.593 [2024-11-20 10:10:23.409502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:52.594 [2024-11-20 10:10:23.409508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:52.594 [2024-11-20 10:10:23.411045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:52.594 [2024-11-20 10:10:23.411209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:52.594 [2024-11-20 10:10:23.411481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:52.594 [2024-11-20 10:10:23.411482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:53.166 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:53.166 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:36:53.166 10:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:53.166 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.166 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:53.166 INFO: Log level set to 20 00:36:53.166 INFO: Requests: 00:36:53.166 { 00:36:53.166 "jsonrpc": "2.0", 00:36:53.166 "method": "nvmf_set_config", 00:36:53.166 "id": 1, 00:36:53.166 "params": { 00:36:53.166 "admin_cmd_passthru": { 00:36:53.166 "identify_ctrlr": true 00:36:53.166 } 00:36:53.166 } 00:36:53.166 } 00:36:53.166 00:36:53.166 INFO: response: 00:36:53.166 { 00:36:53.166 "jsonrpc": "2.0", 00:36:53.166 "id": 1, 00:36:53.166 "result": true 00:36:53.166 } 00:36:53.166 00:36:53.166 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.166 10:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:53.427 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.427 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:53.427 INFO: Setting log level to 20 00:36:53.427 INFO: Setting log level to 20 00:36:53.427 INFO: Log level set to 20 00:36:53.427 INFO: Log level set to 20 00:36:53.427 INFO: Requests: 00:36:53.427 { 00:36:53.427 "jsonrpc": "2.0", 00:36:53.427 "method": "framework_start_init", 00:36:53.427 "id": 1 00:36:53.427 } 00:36:53.427 00:36:53.427 INFO: Requests: 00:36:53.427 { 00:36:53.427 "jsonrpc": "2.0", 00:36:53.427 "method": "framework_start_init", 00:36:53.427 "id": 1 00:36:53.427 } 00:36:53.427 00:36:53.427 [2024-11-20 10:10:24.138109] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:53.427 INFO: response: 00:36:53.427 { 00:36:53.427 "jsonrpc": "2.0", 00:36:53.427 "id": 1, 00:36:53.427 "result": true 00:36:53.427 } 00:36:53.427 00:36:53.427 INFO: response: 00:36:53.427 { 00:36:53.427 "jsonrpc": "2.0", 00:36:53.427 "id": 1, 00:36:53.427 "result": true 00:36:53.427 } 00:36:53.427 00:36:53.427 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.427 10:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:53.427 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.427 10:10:24 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:53.427 INFO: Setting log level to 40 00:36:53.427 INFO: Setting log level to 40 00:36:53.427 INFO: Setting log level to 40 00:36:53.427 [2024-11-20 10:10:24.151490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:53.427 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.427 10:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:53.427 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:53.427 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:53.427 10:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:53.427 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.427 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:53.689 Nvme0n1 00:36:53.689 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.689 10:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:53.690 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.690 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:53.690 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.690 10:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:53.690 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.690 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:53.690 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.690 10:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:53.690 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.690 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:53.690 [2024-11-20 10:10:24.553118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:53.690 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.690 10:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:53.690 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.690 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:53.690 [ 00:36:53.690 { 00:36:53.690 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:53.690 "subtype": "Discovery", 00:36:53.690 "listen_addresses": [], 00:36:53.690 "allow_any_host": true, 00:36:53.690 "hosts": [] 00:36:53.690 }, 00:36:53.690 { 00:36:53.690 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:53.690 "subtype": "NVMe", 00:36:53.690 "listen_addresses": [ 00:36:53.690 { 00:36:53.690 "trtype": "TCP", 00:36:53.690 "adrfam": "IPv4", 00:36:53.690 "traddr": "10.0.0.2", 00:36:53.690 "trsvcid": "4420" 00:36:53.690 } 00:36:53.690 ], 00:36:53.690 "allow_any_host": true, 00:36:53.690 "hosts": [], 00:36:53.690 "serial_number": 
"SPDK00000000000001", 00:36:53.690 "model_number": "SPDK bdev Controller", 00:36:53.690 "max_namespaces": 1, 00:36:53.690 "min_cntlid": 1, 00:36:53.690 "max_cntlid": 65519, 00:36:53.690 "namespaces": [ 00:36:53.690 { 00:36:53.690 "nsid": 1, 00:36:53.690 "bdev_name": "Nvme0n1", 00:36:53.690 "name": "Nvme0n1", 00:36:53.690 "nguid": "36344730526054870025384500000044", 00:36:53.690 "uuid": "36344730-5260-5487-0025-384500000044" 00:36:53.690 } 00:36:53.690 ] 00:36:53.690 } 00:36:53.690 ] 00:36:53.690 10:10:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.690 10:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:53.690 10:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:53.690 10:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:54.263 10:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:36:54.263 10:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:54.263 10:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:54.263 10:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:54.263 10:10:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:36:54.263 10:10:25 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:36:54.263 10:10:25 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:36:54.263 10:10:25 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:54.263 10:10:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.263 10:10:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:54.263 10:10:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.263 10:10:25 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:54.263 10:10:25 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:54.263 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:54.263 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:54.263 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:54.263 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:54.263 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:54.263 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:54.263 rmmod nvme_tcp 00:36:54.263 rmmod nvme_fabrics 00:36:54.263 rmmod nvme_keyring 00:36:54.263 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:54.263 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:54.263 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:54.263 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
1670893 ']' 00:36:54.263 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1670893 00:36:54.263 10:10:25 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1670893 ']' 00:36:54.263 10:10:25 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1670893 00:36:54.263 10:10:25 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:36:54.263 10:10:25 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:54.263 10:10:25 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1670893 00:36:54.524 10:10:25 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:54.524 10:10:25 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:54.524 10:10:25 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1670893' 00:36:54.524 killing process with pid 1670893 00:36:54.524 10:10:25 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1670893 00:36:54.524 10:10:25 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1670893 00:36:54.524 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:54.524 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:54.524 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:54.524 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:54.524 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:54.524 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:54.524 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:54.524 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:54.524 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:54.784 10:10:25 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:54.784 10:10:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:54.784 10:10:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:56.699 10:10:27 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:56.699 00:36:56.699 real 0m13.072s 00:36:56.699 user 0m10.322s 00:36:56.699 sys 0m6.620s 00:36:56.699 10:10:27 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:56.699 10:10:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:56.699 ************************************ 00:36:56.699 END TEST nvmf_identify_passthru 00:36:56.699 ************************************ 00:36:56.699 10:10:27 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:56.699 10:10:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:56.699 10:10:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:56.699 10:10:27 -- common/autotest_common.sh@10 -- # set +x 00:36:56.699 ************************************ 00:36:56.699 START TEST nvmf_dif 00:36:56.699 ************************************ 00:36:56.699 10:10:27 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:56.961 * Looking for test storage... 
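Before the dif suite gets going, the core of the identify_passthru check that just passed is worth spelling out: with --passthru-identify-ctrlr set, identify data served over the NVMe-oF TCP export must match the physical controller, so the test identifies the drive twice and compares the serial and model strings. A sketch of that comparison, same $SPDK assumption as above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    id=$SPDK/build/bin/spdk_nvme_identify
    local_sn=$($id -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 | awk '/Serial Number:/ {print $3}')
    remote_sn=$($id -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | awk '/Serial Number:/ {print $3}')
    [ "$local_sn" = "$remote_sn" ] || echo "passthru identify mismatch: $local_sn vs $remote_sn" >&2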
00:36:56.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:56.961 10:10:27 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:56.961 10:10:27 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:56.961 10:10:27 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:36:56.961 10:10:27 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:56.961 10:10:27 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:56.961 10:10:27 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:56.961 10:10:27 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:56.961 10:10:27 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:56.961 10:10:27 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:56.961 10:10:27 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:56.961 10:10:27 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:56.961 10:10:27 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:56.961 10:10:27 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:56.961 10:10:27 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:56.961 10:10:27 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:56.961 10:10:27 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:56.961 10:10:27 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:56.961 10:10:27 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:56.961 10:10:27 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:56.961 10:10:27 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:56.961 10:10:27 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:56.961 10:10:27 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:56.961 10:10:27 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:56.962 10:10:27 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:56.962 10:10:27 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:56.962 10:10:27 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:56.962 10:10:27 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:56.962 10:10:27 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:56.962 10:10:27 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:56.962 10:10:27 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:56.962 10:10:27 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:56.962 10:10:27 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:56.962 10:10:27 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:56.962 10:10:27 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:56.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.962 --rc genhtml_branch_coverage=1 00:36:56.962 --rc genhtml_function_coverage=1 00:36:56.962 --rc genhtml_legend=1 00:36:56.962 --rc geninfo_all_blocks=1 00:36:56.962 --rc geninfo_unexecuted_blocks=1 00:36:56.962 00:36:56.962 ' 00:36:56.962 10:10:27 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:56.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.962 --rc genhtml_branch_coverage=1 00:36:56.962 --rc genhtml_function_coverage=1 00:36:56.962 --rc genhtml_legend=1 00:36:56.962 --rc geninfo_all_blocks=1 00:36:56.962 --rc geninfo_unexecuted_blocks=1 00:36:56.962 00:36:56.962 ' 00:36:56.962 10:10:27 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:36:56.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.962 --rc genhtml_branch_coverage=1 00:36:56.962 --rc genhtml_function_coverage=1 00:36:56.962 --rc genhtml_legend=1 00:36:56.962 --rc geninfo_all_blocks=1 00:36:56.962 --rc geninfo_unexecuted_blocks=1 00:36:56.962 00:36:56.962 ' 00:36:56.962 10:10:27 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:56.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.962 --rc genhtml_branch_coverage=1 00:36:56.962 --rc genhtml_function_coverage=1 00:36:56.962 --rc genhtml_legend=1 00:36:56.962 --rc geninfo_all_blocks=1 00:36:56.962 --rc geninfo_unexecuted_blocks=1 00:36:56.962 00:36:56.962 ' 00:36:56.962 10:10:27 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:56.962 10:10:27 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:56.962 10:10:27 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:56.962 10:10:27 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:56.962 10:10:27 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:56.962 10:10:27 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.962 10:10:27 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.962 10:10:27 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.962 10:10:27 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:56.962 10:10:27 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:56.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:56.962 10:10:27 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:56.962 10:10:27 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:56.962 10:10:27 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:56.962 10:10:27 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:56.962 10:10:27 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:56.962 10:10:27 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:56.962 10:10:27 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:56.962 10:10:27 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:36:56.962 10:10:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:05.106 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:05.106 
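The scan traced above classifies candidate NICs by PCI vendor:device ID into the e810/x722/mlx arrays; on this phy-mode host it keeps the two E810 functions (0x8086:0x159b, ice driver). Mapping each PCI function to its kernel interface is a plain sysfs glob, as the nvmf/common.sh@411 and @427 steps below show; a minimal standalone sketch of that lookup (the helper name and the example address are illustrative, the glob and suffix-strip mirror the traced lines):

# Resolve a PCI function (e.g. 0000:4b:00.0) to its net device name(s).
pci_to_netdev() {
  local pci=$1
  # Every netdev bound to the function shows up as a directory entry here.
  local pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  # An unmatched glob stays literal, so -e screens out port-less functions.
  [[ -e ${pci_net_devs[0]} ]] || return 1
  # Strip the sysfs path, leaving names such as cvl_0_0 on this host.
  printf '%s\n' "${pci_net_devs[@]##*/}"
}
pci_to_netdev 0000:4b:00.0
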
10:10:34 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:05.106 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:05.106 10:10:34 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:05.107 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:05.107 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:05.107 10:10:34 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:05.107 10:10:35 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:05.107 10:10:35 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:05.107 10:10:35 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:05.107 10:10:35 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:05.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:05.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:37:05.107 00:37:05.107 --- 10.0.0.2 ping statistics --- 00:37:05.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:05.107 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:37:05.107 10:10:35 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:05.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:05.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.398 ms 00:37:05.107 00:37:05.107 --- 10.0.0.1 ping statistics --- 00:37:05.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:05.107 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:37:05.107 10:10:35 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:05.107 10:10:35 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:37:05.107 10:10:35 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:05.107 10:10:35 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:07.652 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:07.652 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:07.652 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:07.652 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:07.652 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:07.652 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:07.652 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:07.652 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:07.652 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:07.652 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:37:07.652 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:07.652 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:07.652 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:07.652 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:07.652 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:07.652 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:07.652 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:08.223 10:10:38 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:08.223 10:10:38 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:08.223 10:10:38 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:08.223 10:10:38 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:08.223 10:10:38 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:08.223 10:10:38 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:08.223 10:10:38 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:37:08.223 10:10:38 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:37:08.223 10:10:38 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:08.223 10:10:38 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:08.223 10:10:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:08.223 10:10:38 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1676776 00:37:08.223 10:10:38 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1676776 00:37:08.223 10:10:38 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:08.223 10:10:38 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1676776 ']' 00:37:08.223 10:10:38 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:08.223 10:10:38 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:08.223 10:10:38 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:37:08.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:08.223 10:10:38 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:08.223 10:10:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:08.223 [2024-11-20 10:10:38.979541] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:37:08.223 [2024-11-20 10:10:38.979596] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:08.223 [2024-11-20 10:10:39.075908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:08.223 [2024-11-20 10:10:39.110832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:08.223 [2024-11-20 10:10:39.110866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:08.223 [2024-11-20 10:10:39.110874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:08.223 [2024-11-20 10:10:39.110881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:08.223 [2024-11-20 10:10:39.110886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:08.223 [2024-11-20 10:10:39.111489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:09.173 10:10:39 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:09.173 10:10:39 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:37:09.173 10:10:39 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:09.173 10:10:39 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:09.173 10:10:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:09.173 10:10:39 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:09.173 10:10:39 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:09.173 10:10:39 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:09.173 10:10:39 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.173 10:10:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:09.173 [2024-11-20 10:10:39.825465] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:09.173 10:10:39 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.173 10:10:39 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:09.173 10:10:39 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:09.173 10:10:39 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:09.173 10:10:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:09.173 ************************************ 00:37:09.173 START TEST fio_dif_1_default 00:37:09.173 ************************************ 00:37:09.173 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:37:09.173 10:10:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:09.173 10:10:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:09.173 10:10:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:37:09.173 10:10:39 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:37:09.173 10:10:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:09.173 10:10:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:09.173 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.173 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:09.173 bdev_null0 00:37:09.173 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.173 10:10:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:09.173 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.173 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:09.173 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.173 10:10:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:09.173 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.173 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:09.173 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:09.174 [2024-11-20 10:10:39.913825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:09.174 { 00:37:09.174 "params": { 00:37:09.174 "name": "Nvme$subsystem", 00:37:09.174 "trtype": "$TEST_TRANSPORT", 00:37:09.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:09.174 "adrfam": "ipv4", 00:37:09.174 "trsvcid": "$NVMF_PORT", 00:37:09.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:09.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:09.174 "hdgst": ${hdgst:-false}, 00:37:09.174 
"ddgst": ${ddgst:-false} 00:37:09.174 }, 00:37:09.174 "method": "bdev_nvme_attach_controller" 00:37:09.174 } 00:37:09.174 EOF 00:37:09.174 )") 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:09.174 "params": { 00:37:09.174 "name": "Nvme0", 00:37:09.174 "trtype": "tcp", 00:37:09.174 "traddr": "10.0.0.2", 00:37:09.174 "adrfam": "ipv4", 00:37:09.174 "trsvcid": "4420", 00:37:09.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:09.174 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:09.174 "hdgst": false, 00:37:09.174 "ddgst": false 00:37:09.174 }, 00:37:09.174 "method": "bdev_nvme_attach_controller" 00:37:09.174 }' 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:09.174 10:10:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:09.174 10:10:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:09.174 10:10:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:09.174 10:10:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:09.174 10:10:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:09.433 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:09.433 fio-3.35 00:37:09.433 Starting 1 thread 00:37:21.663 00:37:21.663 filename0: (groupid=0, jobs=1): err= 0: pid=1677313: Wed Nov 20 10:10:50 2024 00:37:21.663 read: IOPS=304, BW=1219KiB/s (1248kB/s)(11.9MiB/10030msec) 00:37:21.663 slat (nsec): min=5404, max=57761, avg=6968.58, stdev=1735.68 00:37:21.663 clat (usec): min=544, max=44826, avg=13109.14, stdev=18566.71 00:37:21.663 lat (usec): min=552, max=44861, avg=13116.11, stdev=18566.22 00:37:21.663 clat percentiles (usec): 00:37:21.663 | 1.00th=[ 611], 5.00th=[ 717], 10.00th=[ 783], 20.00th=[ 832], 00:37:21.663 | 30.00th=[ 865], 40.00th=[ 914], 50.00th=[ 971], 60.00th=[ 1012], 00:37:21.663 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:37:21.663 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:37:21.663 | 99.99th=[44827] 00:37:21.663 bw ( KiB/s): min= 768, max= 5120, per=100.00%, avg=1220.80, stdev=1081.65, samples=20 00:37:21.663 iops : min= 192, max= 1280, avg=305.20, stdev=270.41, samples=20 00:37:21.663 lat (usec) : 750=6.81%, 1000=50.10% 00:37:21.663 lat (msec) : 2=12.86%, 50=30.24% 00:37:21.663 cpu : usr=93.11%, sys=6.62%, ctx=14, majf=0, minf=253 00:37:21.663 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:21.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.663 issued rwts: total=3056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.663 latency : target=0, window=0, percentile=100.00%, depth=4 
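The per-job numbers above are internally consistent and worth cross-checking when reading these reports:

3056 reads x 4 KiB  = 12224 KiB ~ 11.9 MiB   (matches io=11.9MiB / 12.5MB)
3056 / 10.030 s     ~ 304.7 IOPS             (matches IOPS=304)
304.7 x 4 KiB       ~ 1219 KiB/s             (matches BW=1219KiB/s)

The 13.1 ms average completion latency is dominated by the ~30% of I/Os that land in the ~41 ms band (0.3024 x ~41.2 ms ~ 12.5 ms); the sub-millisecond majority adds well under 1 ms to the mean, which is why the percentile table is so sharply bimodal.
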
00:37:21.663 00:37:21.663 Run status group 0 (all jobs): 00:37:21.663 READ: bw=1219KiB/s (1248kB/s), 1219KiB/s-1219KiB/s (1248kB/s-1248kB/s), io=11.9MiB (12.5MB), run=10030-10030msec 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.663 00:37:21.663 real 0m11.236s 00:37:21.663 user 0m23.760s 00:37:21.663 sys 0m1.068s 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:21.663 ************************************ 00:37:21.663 END TEST fio_dif_1_default 00:37:21.663 ************************************ 00:37:21.663 10:10:51 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:21.663 10:10:51 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:21.663 10:10:51 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:21.663 10:10:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:21.663 ************************************ 00:37:21.663 START TEST fio_dif_1_multi_subsystems 00:37:21.663 ************************************ 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:21.663 bdev_null0 00:37:21.663 10:10:51 
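destroy_subsystems mirrors the setup: per index it deletes the subsystem first and then its backing null bdev, two RPCs in total. A sketch of the unrolled calls for index 0 (issued here through rpc.py directly; rpc_cmd in the suite wraps the same client, and the client path assumes the SPDK checkout as the working directory):

# Teardown order: remove the subsystem before the bdev backing its namespace.
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_null_delete bdev_null0
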
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.663 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:21.664 [2024-11-20 10:10:51.234438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:21.664 bdev_null1 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:21.664 { 00:37:21.664 "params": { 00:37:21.664 "name": "Nvme$subsystem", 00:37:21.664 "trtype": "$TEST_TRANSPORT", 00:37:21.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:21.664 "adrfam": "ipv4", 00:37:21.664 "trsvcid": "$NVMF_PORT", 00:37:21.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:21.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:21.664 "hdgst": ${hdgst:-false}, 00:37:21.664 "ddgst": ${ddgst:-false} 00:37:21.664 }, 00:37:21.664 "method": "bdev_nvme_attach_controller" 00:37:21.664 } 00:37:21.664 EOF 00:37:21.664 )") 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:21.664 
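The JSON that fio_bdev reads from /dev/fd/62 is assembled by gen_nvmf_target_json exactly as traced above: one command-substituted heredoc per subsystem pushed into the config array, the elements joined with IFS=',', and the result run through jq, which both validates and pretty-prints it. A reduced sketch of the pattern (field values hard-coded for illustration, and the fragments wrapped in a bare JSON array here so jq accepts them standalone; the real helper embeds them in a fuller document):

config=()
for subsystem in 0 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
IFS=,                                 # ${config[*]} joins elements on IFS
printf '[%s]\n' "${config[*]}" | jq . # jq fails loudly on malformed JSON
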
10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:21.664 { 00:37:21.664 "params": { 00:37:21.664 "name": "Nvme$subsystem", 00:37:21.664 "trtype": "$TEST_TRANSPORT", 00:37:21.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:21.664 "adrfam": "ipv4", 00:37:21.664 "trsvcid": "$NVMF_PORT", 00:37:21.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:21.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:21.664 "hdgst": ${hdgst:-false}, 00:37:21.664 "ddgst": ${ddgst:-false} 00:37:21.664 }, 00:37:21.664 "method": "bdev_nvme_attach_controller" 00:37:21.664 } 00:37:21.664 EOF 00:37:21.664 )") 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:21.664 "params": { 00:37:21.664 "name": "Nvme0", 00:37:21.664 "trtype": "tcp", 00:37:21.664 "traddr": "10.0.0.2", 00:37:21.664 "adrfam": "ipv4", 00:37:21.664 "trsvcid": "4420", 00:37:21.664 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:21.664 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:21.664 "hdgst": false, 00:37:21.664 "ddgst": false 00:37:21.664 }, 00:37:21.664 "method": "bdev_nvme_attach_controller" 00:37:21.664 },{ 00:37:21.664 "params": { 00:37:21.664 "name": "Nvme1", 00:37:21.664 "trtype": "tcp", 00:37:21.664 "traddr": "10.0.0.2", 00:37:21.664 "adrfam": "ipv4", 00:37:21.664 "trsvcid": "4420", 00:37:21.664 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:21.664 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:21.664 "hdgst": false, 00:37:21.664 "ddgst": false 00:37:21.664 }, 00:37:21.664 "method": "bdev_nvme_attach_controller" 00:37:21.664 }' 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
-- # asan_lib= 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:21.664 10:10:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:21.664 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:21.664 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:21.664 fio-3.35 00:37:21.664 Starting 2 threads 00:37:31.661 00:37:31.661 filename0: (groupid=0, jobs=1): err= 0: pid=1679796: Wed Nov 20 10:11:02 2024 00:37:31.661 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10040msec) 00:37:31.661 slat (nsec): min=5410, max=31662, avg=6212.35, stdev=1602.96 00:37:31.661 clat (usec): min=587, max=42157, avg=21073.79, stdev=20159.43 00:37:31.661 lat (usec): min=592, max=42187, avg=21080.00, stdev=20159.40 00:37:31.661 clat percentiles (usec): 00:37:31.661 | 1.00th=[ 635], 5.00th=[ 791], 10.00th=[ 807], 20.00th=[ 832], 00:37:31.661 | 30.00th=[ 848], 40.00th=[ 865], 50.00th=[41157], 60.00th=[41157], 00:37:31.661 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:31.661 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:37:31.661 | 99.99th=[42206] 00:37:31.661 bw ( KiB/s): min= 672, max= 768, per=50.09%, avg=760.00, stdev=25.16, samples=20 00:37:31.661 iops : min= 168, max= 192, avg=190.00, stdev= 6.29, samples=20 00:37:31.661 lat (usec) : 750=1.89%, 1000=47.69% 00:37:31.661 lat (msec) : 2=0.21%, 50=50.21% 00:37:31.661 cpu : usr=95.88%, sys=3.91%, ctx=9, majf=0, minf=65 00:37:31.661 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:31.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.661 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:31.661 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:31.661 filename1: (groupid=0, jobs=1): err= 0: pid=1679797: Wed Nov 20 10:11:02 2024 00:37:31.661 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10038msec) 00:37:31.661 slat (nsec): min=5401, max=31938, avg=6257.10, stdev=1705.59 00:37:31.661 clat (usec): min=565, max=42241, avg=21069.81, stdev=20160.15 00:37:31.661 lat (usec): min=571, max=42272, avg=21076.07, stdev=20160.20 00:37:31.661 clat percentiles (usec): 00:37:31.661 | 1.00th=[ 619], 5.00th=[ 725], 10.00th=[ 783], 20.00th=[ 840], 00:37:31.661 | 30.00th=[ 857], 40.00th=[ 881], 50.00th=[40633], 60.00th=[41157], 00:37:31.661 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:31.661 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:37:31.661 | 99.99th=[42206] 00:37:31.661 bw ( KiB/s): min= 672, max= 768, per=50.09%, avg=760.00, stdev=22.92, samples=20 00:37:31.661 iops : min= 168, max= 192, avg=190.00, stdev= 5.73, samples=20 00:37:31.661 lat (usec) : 750=7.14%, 1000=42.23% 00:37:31.661 lat (msec) : 2=0.42%, 50=50.21% 00:37:31.661 cpu : usr=95.57%, sys=4.22%, ctx=8, majf=0, minf=185 00:37:31.661 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:31.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.661 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:31.661 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:31.661 00:37:31.661 Run status group 0 (all jobs): 00:37:31.661 READ: bw=1517KiB/s (1554kB/s), 759KiB/s-759KiB/s (777kB/s-777kB/s), io=14.9MiB (15.6MB), run=10038-10040msec 00:37:31.922 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:31.922 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:31.922 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:31.922 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:31.922 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:31.922 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:31.922 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.922 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:31.922 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.922 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:31.923 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.923 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:31.923 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.923 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:31.923 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:31.923 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:31.923 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:31.923 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.923 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:31.923 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.923 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:31.923 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.923 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:31.923 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.923 00:37:31.923 real 0m11.448s 00:37:31.923 user 0m35.987s 00:37:31.923 sys 0m1.162s 00:37:31.923 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:31.923 10:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:31.923 ************************************ 00:37:31.923 END TEST fio_dif_1_multi_subsystems 00:37:31.923 ************************************ 00:37:31.923 10:11:02 nvmf_dif -- 
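The fio_dif_rand_params test that starts next reuses the same create_subsystems path; the transport for the whole suite was already created once with DIF insert/strip offload (target/dif.sh@136-139 earlier), so the only setup change is T10 DIF type 3 on the null bdev, which fio then drives with 128 KiB random reads, 3 jobs, queue depth 3, 5 seconds per run. A sketch of the differing RPC (client path assumed, as in the teardown sketch above):

# Suite-wide transport, created once earlier with PI offload:
#   nvmf_create_transport -t tcp -o --dif-insert-or-strip
# Per-test bdev for rand_params: 64 MB null bdev, 512 B blocks,
# 16 B metadata, now with DIF type 3 instead of type 1.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
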
target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:31.923 10:11:02 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:31.923 10:11:02 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:31.923 10:11:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:31.923 ************************************ 00:37:31.923 START TEST fio_dif_rand_params 00:37:31.923 ************************************ 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:31.923 bdev_null0 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:31.923 [2024-11-20 10:11:02.769875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:31.923 { 00:37:31.923 "params": { 00:37:31.923 "name": "Nvme$subsystem", 00:37:31.923 "trtype": "$TEST_TRANSPORT", 00:37:31.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:31.923 "adrfam": "ipv4", 00:37:31.923 "trsvcid": "$NVMF_PORT", 00:37:31.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:31.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:31.923 "hdgst": ${hdgst:-false}, 00:37:31.923 "ddgst": ${ddgst:-false} 00:37:31.923 }, 00:37:31.923 "method": "bdev_nvme_attach_controller" 00:37:31.923 } 00:37:31.923 EOF 00:37:31.923 )") 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@584 -- # jq . 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:31.923 "params": { 00:37:31.923 "name": "Nvme0", 00:37:31.923 "trtype": "tcp", 00:37:31.923 "traddr": "10.0.0.2", 00:37:31.923 "adrfam": "ipv4", 00:37:31.923 "trsvcid": "4420", 00:37:31.923 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:31.923 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:31.923 "hdgst": false, 00:37:31.923 "ddgst": false 00:37:31.923 }, 00:37:31.923 "method": "bdev_nvme_attach_controller" 00:37:31.923 }' 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:31.923 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:32.214 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:32.214 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:32.214 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:32.214 10:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:32.475 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:32.475 ... 
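(For reference: the run captured above drives fio through the SPDK fio bdev plugin, feeding the generated bdev_nvme_attach_controller JSON in via /dev/fd/62 and the generated job file via /dev/fd/61. Below is a minimal standalone sketch of the same invocation using regular files. The outer "subsystems" JSON wrapper and the job-file contents, including the bdev name Nvme0n1, are reconstructions assumed from the log, not the exact output of gen_nvmf_target_json/gen_fio_conf.)

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Bdev-layer config: attach the NVMe/TCP controller exposed by the target above.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Job file matching the parameters set at target/dif.sh@103 above
# (bs=128k, numjobs=3, iodepth=3, runtime=5); filename= names a bdev,
# here assumed to be namespace 1 of controller Nvme0.
cat > /tmp/dif.fio <<'EOF'
[filename0]
ioengine=spdk_bdev
filename=Nvme0n1
rw=randread
bs=128k
numjobs=3
iodepth=3
runtime=5
time_based=1
EOF

LD_PRELOAD=$SPDK/build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio

With LD_PRELOAD pointing at build/fio/spdk_bdev, fio resolves the externally registered spdk_bdev ioengine and opens the bdev named by filename= instead of a host file, which is why the log's sanitizer probe (ldd | grep libasan) runs against the plugin before building LD_PRELOAD.)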
00:37:32.475 fio-3.35 00:37:32.475 Starting 3 threads 00:37:39.057 00:37:39.057 filename0: (groupid=0, jobs=1): err= 0: pid=1681997: Wed Nov 20 10:11:08 2024 00:37:39.057 read: IOPS=297, BW=37.2MiB/s (39.0MB/s)(188MiB/5046msec) 00:37:39.057 slat (nsec): min=5427, max=45338, avg=7940.66, stdev=2009.51 00:37:39.057 clat (usec): min=4407, max=89354, avg=10032.38, stdev=8101.04 00:37:39.057 lat (usec): min=4415, max=89366, avg=10040.32, stdev=8101.20 00:37:39.057 clat percentiles (usec): 00:37:39.057 | 1.00th=[ 5145], 5.00th=[ 6259], 10.00th=[ 6849], 20.00th=[ 7439], 00:37:39.057 | 30.00th=[ 7832], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8979], 00:37:39.057 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[11076], 00:37:39.057 | 99.00th=[48497], 99.50th=[50070], 99.90th=[88605], 99.95th=[89654], 00:37:39.057 | 99.99th=[89654] 00:37:39.057 bw ( KiB/s): min=21504, max=46592, per=31.93%, avg=38425.60, stdev=8199.51, samples=10 00:37:39.057 iops : min= 168, max= 364, avg=300.20, stdev=64.06, samples=10 00:37:39.057 lat (msec) : 10=85.56%, 20=10.65%, 50=3.26%, 100=0.53% 00:37:39.057 cpu : usr=94.37%, sys=5.37%, ctx=7, majf=0, minf=48 00:37:39.057 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:39.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.057 issued rwts: total=1503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.057 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:39.057 filename0: (groupid=0, jobs=1): err= 0: pid=1681998: Wed Nov 20 10:11:08 2024 00:37:39.057 read: IOPS=326, BW=40.8MiB/s (42.8MB/s)(206MiB/5044msec) 00:37:39.057 slat (nsec): min=5630, max=45514, avg=8618.04, stdev=1551.21 00:37:39.057 clat (usec): min=4492, max=54239, avg=9151.01, stdev=6359.60 00:37:39.057 lat (usec): min=4501, max=54271, avg=9159.63, stdev=6359.87 00:37:39.057 clat percentiles (usec): 00:37:39.057 | 1.00th=[ 4948], 5.00th=[ 5932], 10.00th=[ 6521], 20.00th=[ 7177], 00:37:39.057 | 30.00th=[ 7570], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8586], 00:37:39.057 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9765], 95.00th=[10290], 00:37:39.057 | 99.00th=[47973], 99.50th=[49021], 99.90th=[52691], 99.95th=[54264], 00:37:39.057 | 99.99th=[54264] 00:37:39.057 bw ( KiB/s): min=33024, max=49664, per=34.99%, avg=42112.00, stdev=4939.39, samples=10 00:37:39.057 iops : min= 258, max= 388, avg=329.00, stdev=38.59, samples=10 00:37:39.057 lat (msec) : 10=93.14%, 20=4.37%, 50=2.19%, 100=0.30% 00:37:39.057 cpu : usr=94.01%, sys=5.75%, ctx=11, majf=0, minf=122 00:37:39.057 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:39.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.057 issued rwts: total=1647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.057 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:39.057 filename0: (groupid=0, jobs=1): err= 0: pid=1681999: Wed Nov 20 10:11:08 2024 00:37:39.057 read: IOPS=316, BW=39.5MiB/s (41.4MB/s)(199MiB/5044msec) 00:37:39.057 slat (nsec): min=5476, max=33204, avg=8048.45, stdev=1711.00 00:37:39.057 clat (usec): min=5022, max=49666, avg=9457.03, stdev=5374.48 00:37:39.057 lat (usec): min=5028, max=49672, avg=9465.08, stdev=5374.58 00:37:39.057 clat percentiles (usec): 00:37:39.057 | 1.00th=[ 5604], 5.00th=[ 6325], 10.00th=[ 6980], 20.00th=[ 7635], 
00:37:39.057 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9241], 00:37:39.057 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10552], 95.00th=[11076], 00:37:39.057 | 99.00th=[47449], 99.50th=[48497], 99.90th=[49021], 99.95th=[49546], 00:37:39.057 | 99.99th=[49546] 00:37:39.057 bw ( KiB/s): min=31551, max=44288, per=33.85%, avg=40735.90, stdev=4305.05, samples=10 00:37:39.057 iops : min= 246, max= 346, avg=318.20, stdev=33.75, samples=10 00:37:39.057 lat (msec) : 10=80.55%, 20=17.63%, 50=1.82% 00:37:39.057 cpu : usr=94.41%, sys=5.35%, ctx=7, majf=0, minf=102 00:37:39.057 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:39.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.057 issued rwts: total=1594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.057 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:39.057 00:37:39.057 Run status group 0 (all jobs): 00:37:39.057 READ: bw=118MiB/s (123MB/s), 37.2MiB/s-40.8MiB/s (39.0MB/s-42.8MB/s), io=593MiB (622MB), run=5044-5046msec 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:39.057 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.058 bdev_null0 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.058 [2024-11-20 10:11:08.953755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.058 bdev_null1 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.058 10:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.058 bdev_null2 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 
-- # for subsystem in "${@:-1}" 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:39.058 { 00:37:39.058 "params": { 00:37:39.058 "name": "Nvme$subsystem", 00:37:39.058 "trtype": "$TEST_TRANSPORT", 00:37:39.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:39.058 "adrfam": "ipv4", 00:37:39.058 "trsvcid": "$NVMF_PORT", 00:37:39.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:39.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:39.058 "hdgst": ${hdgst:-false}, 00:37:39.058 "ddgst": ${ddgst:-false} 00:37:39.058 }, 00:37:39.058 "method": "bdev_nvme_attach_controller" 00:37:39.058 } 00:37:39.058 EOF 00:37:39.058 )") 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:39.058 { 00:37:39.058 "params": { 00:37:39.058 "name": "Nvme$subsystem", 00:37:39.058 "trtype": "$TEST_TRANSPORT", 00:37:39.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:39.058 "adrfam": "ipv4", 00:37:39.058 "trsvcid": "$NVMF_PORT", 00:37:39.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:39.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:39.058 "hdgst": ${hdgst:-false}, 00:37:39.058 "ddgst": ${ddgst:-false} 00:37:39.058 }, 00:37:39.058 "method": "bdev_nvme_attach_controller" 00:37:39.058 } 00:37:39.058 EOF 00:37:39.058 )") 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 
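(For reference: the create_subsystems 0 1 2 call being traced here expands, per index, into the RPC sequence shown in the log entries above; rpc_cmd is the autotest framework's wrapper around scripts/rpc.py. A sketch of the equivalent standalone commands, assuming the default local RPC socket:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
for i in 0 1 2; do
  # null bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 2
  "$SPDK/scripts/rpc.py" bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
      --serial-number "53313233-$i" --allow-any-host
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
      -t tcp -a 10.0.0.2 -s 4420
done

Each iteration yields one NVMe/TCP subsystem backed by a DIF-enabled null bdev, and the surrounding gen_nvmf_target_json loop then emits one bdev_nvme_attach_controller stanza per cnode, which is the three-entry JSON printed further below.)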
00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:39.058 { 00:37:39.058 "params": { 00:37:39.058 "name": "Nvme$subsystem", 00:37:39.058 "trtype": "$TEST_TRANSPORT", 00:37:39.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:39.058 "adrfam": "ipv4", 00:37:39.058 "trsvcid": "$NVMF_PORT", 00:37:39.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:39.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:39.058 "hdgst": ${hdgst:-false}, 00:37:39.058 "ddgst": ${ddgst:-false} 00:37:39.058 }, 00:37:39.058 "method": "bdev_nvme_attach_controller" 00:37:39.058 } 00:37:39.058 EOF 00:37:39.058 )") 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:39.058 10:11:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:39.059 10:11:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:39.059 10:11:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:39.059 "params": { 00:37:39.059 "name": "Nvme0", 00:37:39.059 "trtype": "tcp", 00:37:39.059 "traddr": "10.0.0.2", 00:37:39.059 "adrfam": "ipv4", 00:37:39.059 "trsvcid": "4420", 00:37:39.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:39.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:39.059 "hdgst": false, 00:37:39.059 "ddgst": false 00:37:39.059 }, 00:37:39.059 "method": "bdev_nvme_attach_controller" 00:37:39.059 },{ 00:37:39.059 "params": { 00:37:39.059 "name": "Nvme1", 00:37:39.059 "trtype": "tcp", 00:37:39.059 "traddr": "10.0.0.2", 00:37:39.059 "adrfam": "ipv4", 00:37:39.059 "trsvcid": "4420", 00:37:39.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:39.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:39.059 "hdgst": false, 00:37:39.059 "ddgst": false 00:37:39.059 }, 00:37:39.059 "method": "bdev_nvme_attach_controller" 00:37:39.059 },{ 00:37:39.059 "params": { 00:37:39.059 "name": "Nvme2", 00:37:39.059 "trtype": "tcp", 00:37:39.059 "traddr": "10.0.0.2", 00:37:39.059 "adrfam": "ipv4", 00:37:39.059 "trsvcid": "4420", 00:37:39.059 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:39.059 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:39.059 "hdgst": false, 00:37:39.059 "ddgst": false 00:37:39.059 }, 00:37:39.059 "method": "bdev_nvme_attach_controller" 00:37:39.059 }' 00:37:39.059 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:39.059 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:39.059 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:39.059 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:39.059 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:39.059 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:39.059 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:39.059 10:11:09 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:39.059 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:39.059 10:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:39.059 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:39.059 ... 00:37:39.059 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:39.059 ... 00:37:39.059 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:39.059 ... 00:37:39.059 fio-3.35 00:37:39.059 Starting 24 threads 00:37:51.289 00:37:51.289 filename0: (groupid=0, jobs=1): err= 0: pid=1683497: Wed Nov 20 10:11:20 2024 00:37:51.289 read: IOPS=683, BW=2732KiB/s (2798kB/s)(26.7MiB/10009msec) 00:37:51.289 slat (usec): min=5, max=115, avg=10.77, stdev= 8.30 00:37:51.289 clat (usec): min=5519, max=41732, avg=23334.85, stdev=2681.41 00:37:51.289 lat (usec): min=5525, max=41763, avg=23345.62, stdev=2680.81 00:37:51.289 clat percentiles (usec): 00:37:51.289 | 1.00th=[ 8160], 5.00th=[19530], 10.00th=[22938], 20.00th=[23462], 00:37:51.289 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:51.289 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:37:51.289 | 99.00th=[29492], 99.50th=[30278], 99.90th=[31589], 99.95th=[41681], 00:37:51.289 | 99.99th=[41681] 00:37:51.289 bw ( KiB/s): min= 2554, max= 3064, per=4.22%, avg=2736.63, stdev=116.11, samples=19 00:37:51.289 iops : min= 638, max= 766, avg=684.11, stdev=29.09, samples=19 00:37:51.289 lat (msec) : 10=1.24%, 20=4.14%, 50=94.62% 00:37:51.289 cpu : usr=99.15%, sys=0.59%, ctx=13, majf=0, minf=9 00:37:51.289 IO depths : 1=5.5%, 2=11.4%, 4=23.9%, 8=52.2%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:51.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.289 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.289 issued rwts: total=6837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.289 filename0: (groupid=0, jobs=1): err= 0: pid=1683498: Wed Nov 20 10:11:20 2024 00:37:51.289 read: IOPS=676, BW=2705KiB/s (2770kB/s)(26.4MiB/10009msec) 00:37:51.289 slat (nsec): min=5591, max=77546, avg=15935.40, stdev=11228.88 00:37:51.289 clat (usec): min=6924, max=31435, avg=23526.75, stdev=1759.18 00:37:51.289 lat (usec): min=6941, max=31441, avg=23542.69, stdev=1758.71 00:37:51.289 clat percentiles (usec): 00:37:51.289 | 1.00th=[10552], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:37:51.289 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:51.289 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24773], 00:37:51.289 | 99.00th=[25297], 99.50th=[25560], 99.90th=[27919], 99.95th=[28705], 00:37:51.289 | 99.99th=[31327] 00:37:51.289 bw ( KiB/s): min= 2554, max= 2944, per=4.18%, avg=2707.58, stdev=88.75, samples=19 00:37:51.289 iops : min= 638, max= 736, avg=676.84, stdev=22.24, samples=19 00:37:51.289 lat (msec) : 10=0.81%, 20=1.08%, 50=98.11% 00:37:51.289 cpu : usr=98.93%, sys=0.79%, ctx=34, majf=0, minf=9 00:37:51.289 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:51.289 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.289 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.289 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.289 filename0: (groupid=0, jobs=1): err= 0: pid=1683499: Wed Nov 20 10:11:20 2024 00:37:51.289 read: IOPS=671, BW=2685KiB/s (2749kB/s)(26.2MiB/10011msec) 00:37:51.289 slat (usec): min=5, max=109, avg=31.21, stdev=15.86 00:37:51.289 clat (usec): min=15689, max=32804, avg=23558.32, stdev=805.38 00:37:51.289 lat (usec): min=15696, max=32816, avg=23589.53, stdev=804.84 00:37:51.289 clat percentiles (usec): 00:37:51.289 | 1.00th=[22152], 5.00th=[22938], 10.00th=[22938], 20.00th=[23200], 00:37:51.289 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:37:51.289 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:51.289 | 99.00th=[25035], 99.50th=[25560], 99.90th=[29492], 99.95th=[29492], 00:37:51.289 | 99.99th=[32900] 00:37:51.289 bw ( KiB/s): min= 2554, max= 2688, per=4.13%, avg=2680.00, stdev=30.59, samples=19 00:37:51.289 iops : min= 638, max= 672, avg=669.89, stdev= 7.76, samples=19 00:37:51.289 lat (msec) : 20=0.86%, 50=99.14% 00:37:51.289 cpu : usr=98.68%, sys=0.93%, ctx=37, majf=0, minf=9 00:37:51.289 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:51.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.289 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.289 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.289 filename0: (groupid=0, jobs=1): err= 0: pid=1683500: Wed Nov 20 10:11:20 2024 00:37:51.289 read: IOPS=660, BW=2644KiB/s (2707kB/s)(25.8MiB/10004msec) 00:37:51.289 slat (nsec): min=5419, max=98255, avg=18078.56, stdev=15453.86 00:37:51.289 clat (usec): min=5487, max=54753, avg=24121.76, stdev=3514.68 00:37:51.289 lat (usec): min=5493, max=54777, avg=24139.84, stdev=3514.45 00:37:51.289 clat percentiles (usec): 00:37:51.289 | 1.00th=[12911], 5.00th=[20055], 10.00th=[22938], 20.00th=[23462], 00:37:51.289 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:51.289 | 70.00th=[23987], 80.00th=[24511], 90.00th=[26084], 95.00th=[30540], 00:37:51.289 | 99.00th=[39060], 99.50th=[40633], 99.90th=[44303], 99.95th=[54789], 00:37:51.289 | 99.99th=[54789] 00:37:51.289 bw ( KiB/s): min= 2436, max= 2736, per=4.06%, avg=2631.37, stdev=84.71, samples=19 00:37:51.289 iops : min= 609, max= 684, avg=657.74, stdev=21.24, samples=19 00:37:51.289 lat (msec) : 10=0.24%, 20=4.75%, 50=94.93%, 100=0.08% 00:37:51.289 cpu : usr=98.92%, sys=0.78%, ctx=30, majf=0, minf=9 00:37:51.289 IO depths : 1=0.4%, 2=0.9%, 4=4.0%, 8=78.3%, 16=16.3%, 32=0.0%, >=64=0.0% 00:37:51.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.289 complete : 0=0.0%, 4=87.5%, 8=10.8%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.289 issued rwts: total=6612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.289 filename0: (groupid=0, jobs=1): err= 0: pid=1683501: Wed Nov 20 10:11:20 2024 00:37:51.289 read: IOPS=674, BW=2698KiB/s (2763kB/s)(26.4MiB/10005msec) 00:37:51.289 slat (nsec): min=5459, max=76714, avg=20583.83, stdev=12674.94 00:37:51.289 clat (usec): min=5335, max=41688, 
avg=23548.12, stdev=1927.66 00:37:51.289 lat (usec): min=5341, max=41707, avg=23568.70, stdev=1928.49 00:37:51.289 clat percentiles (usec): 00:37:51.289 | 1.00th=[14091], 5.00th=[22676], 10.00th=[23200], 20.00th=[23200], 00:37:51.289 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:51.289 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24773], 00:37:51.289 | 99.00th=[26346], 99.50th=[28443], 99.90th=[41681], 99.95th=[41681], 00:37:51.289 | 99.99th=[41681] 00:37:51.289 bw ( KiB/s): min= 2432, max= 2816, per=4.14%, avg=2682.21, stdev=77.77, samples=19 00:37:51.289 iops : min= 608, max= 704, avg=670.42, stdev=19.44, samples=19 00:37:51.289 lat (msec) : 10=0.47%, 20=1.73%, 50=97.79% 00:37:51.289 cpu : usr=99.06%, sys=0.67%, ctx=54, majf=0, minf=9 00:37:51.289 IO depths : 1=2.9%, 2=9.0%, 4=24.4%, 8=54.0%, 16=9.6%, 32=0.0%, >=64=0.0% 00:37:51.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.290 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.290 issued rwts: total=6748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.290 filename0: (groupid=0, jobs=1): err= 0: pid=1683502: Wed Nov 20 10:11:20 2024 00:37:51.290 read: IOPS=670, BW=2683KiB/s (2748kB/s)(26.2MiB/10008msec) 00:37:51.290 slat (nsec): min=5430, max=84861, avg=13995.48, stdev=10962.33 00:37:51.290 clat (usec): min=5450, max=48358, avg=23756.13, stdev=4127.98 00:37:51.290 lat (usec): min=5461, max=48379, avg=23770.12, stdev=4128.92 00:37:51.290 clat percentiles (usec): 00:37:51.290 | 1.00th=[12649], 5.00th=[15795], 10.00th=[20055], 20.00th=[22938], 00:37:51.290 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:51.290 | 70.00th=[23987], 80.00th=[24511], 90.00th=[26870], 95.00th=[30802], 00:37:51.290 | 99.00th=[39060], 99.50th=[40633], 99.90th=[48497], 99.95th=[48497], 00:37:51.290 | 99.99th=[48497] 00:37:51.290 bw ( KiB/s): min= 2480, max= 2848, per=4.12%, avg=2672.42, stdev=104.26, samples=19 00:37:51.290 iops : min= 620, max= 712, avg=668.00, stdev=26.13, samples=19 00:37:51.290 lat (msec) : 10=0.36%, 20=9.82%, 50=89.83% 00:37:51.290 cpu : usr=98.68%, sys=0.91%, ctx=92, majf=0, minf=9 00:37:51.290 IO depths : 1=1.4%, 2=2.9%, 4=9.6%, 8=73.0%, 16=13.0%, 32=0.0%, >=64=0.0% 00:37:51.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.290 complete : 0=0.0%, 4=90.3%, 8=5.8%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.290 issued rwts: total=6714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.290 filename0: (groupid=0, jobs=1): err= 0: pid=1683503: Wed Nov 20 10:11:20 2024 00:37:51.290 read: IOPS=676, BW=2704KiB/s (2769kB/s)(26.4MiB/10010msec) 00:37:51.290 slat (usec): min=5, max=104, avg=25.85, stdev=18.57 00:37:51.290 clat (usec): min=9377, max=40164, avg=23424.37, stdev=1994.20 00:37:51.290 lat (usec): min=9383, max=40182, avg=23450.22, stdev=1995.56 00:37:51.290 clat percentiles (usec): 00:37:51.290 | 1.00th=[14877], 5.00th=[21365], 10.00th=[22938], 20.00th=[23200], 00:37:51.290 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:37:51.290 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24773], 00:37:51.290 | 99.00th=[28705], 99.50th=[32637], 99.90th=[40109], 99.95th=[40109], 00:37:51.290 | 99.99th=[40109] 00:37:51.290 bw ( KiB/s): min= 2560, max= 2784, per=4.16%, avg=2698.26, stdev=48.58, samples=19 
00:37:51.290 iops : min= 640, max= 696, avg=674.47, stdev=12.13, samples=19 00:37:51.290 lat (msec) : 10=0.06%, 20=4.39%, 50=95.55% 00:37:51.290 cpu : usr=99.09%, sys=0.66%, ctx=20, majf=0, minf=9 00:37:51.290 IO depths : 1=4.8%, 2=10.5%, 4=23.4%, 8=53.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:37:51.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.290 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.290 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.290 filename0: (groupid=0, jobs=1): err= 0: pid=1683504: Wed Nov 20 10:11:20 2024 00:37:51.290 read: IOPS=673, BW=2694KiB/s (2759kB/s)(26.3MiB/10001msec) 00:37:51.290 slat (usec): min=5, max=110, avg=25.33, stdev=19.08 00:37:51.290 clat (usec): min=9837, max=29084, avg=23551.73, stdev=1257.51 00:37:51.290 lat (usec): min=9842, max=29091, avg=23577.06, stdev=1256.49 00:37:51.290 clat percentiles (usec): 00:37:51.290 | 1.00th=[16450], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:37:51.290 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:51.290 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:51.290 | 99.00th=[25035], 99.50th=[25560], 99.90th=[27395], 99.95th=[28967], 00:37:51.290 | 99.99th=[28967] 00:37:51.290 bw ( KiB/s): min= 2554, max= 2816, per=4.16%, avg=2694.42, stdev=67.79, samples=19 00:37:51.290 iops : min= 638, max= 704, avg=673.58, stdev=17.00, samples=19 00:37:51.290 lat (msec) : 10=0.16%, 20=1.20%, 50=98.63% 00:37:51.290 cpu : usr=99.04%, sys=0.70%, ctx=34, majf=0, minf=9 00:37:51.290 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:51.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.290 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.290 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.290 filename1: (groupid=0, jobs=1): err= 0: pid=1683505: Wed Nov 20 10:11:20 2024 00:37:51.290 read: IOPS=671, BW=2686KiB/s (2750kB/s)(26.2MiB/10008msec) 00:37:51.290 slat (usec): min=5, max=101, avg=27.45, stdev=17.27 00:37:51.290 clat (usec): min=12724, max=31815, avg=23605.47, stdev=962.29 00:37:51.290 lat (usec): min=12751, max=31846, avg=23632.91, stdev=961.39 00:37:51.290 clat percentiles (usec): 00:37:51.290 | 1.00th=[19006], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:37:51.290 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:51.290 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:51.290 | 99.00th=[25297], 99.50th=[27657], 99.90th=[30278], 99.95th=[31327], 00:37:51.290 | 99.99th=[31851] 00:37:51.290 bw ( KiB/s): min= 2554, max= 2704, per=4.13%, avg=2680.84, stdev=31.28, samples=19 00:37:51.290 iops : min= 638, max= 676, avg=670.11, stdev= 7.92, samples=19 00:37:51.290 lat (msec) : 20=1.22%, 50=98.78% 00:37:51.290 cpu : usr=98.98%, sys=0.76%, ctx=14, majf=0, minf=9 00:37:51.290 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:37:51.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.290 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.290 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.290 
filename1: (groupid=0, jobs=1): err= 0: pid=1683506: Wed Nov 20 10:11:20 2024 00:37:51.290 read: IOPS=671, BW=2688KiB/s (2752kB/s)(26.2MiB/10001msec) 00:37:51.290 slat (usec): min=5, max=103, avg=15.18, stdev=14.57 00:37:51.290 clat (usec): min=14727, max=32940, avg=23692.92, stdev=876.34 00:37:51.290 lat (usec): min=14733, max=32950, avg=23708.09, stdev=874.54 00:37:51.290 clat percentiles (usec): 00:37:51.290 | 1.00th=[20841], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:37:51.290 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:51.290 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:51.290 | 99.00th=[25297], 99.50th=[25560], 99.90th=[28181], 99.95th=[31589], 00:37:51.290 | 99.99th=[32900] 00:37:51.290 bw ( KiB/s): min= 2682, max= 2693, per=4.14%, avg=2687.32, stdev= 2.63, samples=19 00:37:51.290 iops : min= 670, max= 673, avg=671.74, stdev= 0.81, samples=19 00:37:51.290 lat (msec) : 20=0.83%, 50=99.17% 00:37:51.290 cpu : usr=98.74%, sys=0.80%, ctx=161, majf=0, minf=9 00:37:51.290 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:51.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.290 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.290 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.290 filename1: (groupid=0, jobs=1): err= 0: pid=1683507: Wed Nov 20 10:11:20 2024 00:37:51.290 read: IOPS=677, BW=2711KiB/s (2776kB/s)(26.5MiB/10009msec) 00:37:51.290 slat (usec): min=5, max=103, avg=14.93, stdev=10.85 00:37:51.290 clat (usec): min=6732, max=29360, avg=23483.65, stdev=1939.21 00:37:51.290 lat (usec): min=6756, max=29367, avg=23498.58, stdev=1938.00 00:37:51.290 clat percentiles (usec): 00:37:51.290 | 1.00th=[10290], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:37:51.290 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:51.290 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24773], 00:37:51.290 | 99.00th=[25035], 99.50th=[25297], 99.90th=[26084], 99.95th=[27657], 00:37:51.290 | 99.99th=[29492] 00:37:51.290 bw ( KiB/s): min= 2554, max= 2944, per=4.19%, avg=2714.32, stdev=91.52, samples=19 00:37:51.290 iops : min= 638, max= 736, avg=678.53, stdev=22.90, samples=19 00:37:51.290 lat (msec) : 10=0.94%, 20=1.44%, 50=97.61% 00:37:51.290 cpu : usr=99.00%, sys=0.74%, ctx=12, majf=0, minf=9 00:37:51.290 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:51.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.290 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.290 issued rwts: total=6784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.290 filename1: (groupid=0, jobs=1): err= 0: pid=1683508: Wed Nov 20 10:11:20 2024 00:37:51.290 read: IOPS=670, BW=2682KiB/s (2746kB/s)(26.2MiB/10007msec) 00:37:51.290 slat (usec): min=5, max=111, avg=27.11, stdev=18.49 00:37:51.290 clat (usec): min=11814, max=40214, avg=23608.76, stdev=1235.46 00:37:51.290 lat (usec): min=11820, max=40221, avg=23635.88, stdev=1235.20 00:37:51.290 clat percentiles (usec): 00:37:51.290 | 1.00th=[18220], 5.00th=[22938], 10.00th=[22938], 20.00th=[23200], 00:37:51.290 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:37:51.290 | 70.00th=[23725], 80.00th=[23987], 
90.00th=[24249], 95.00th=[24511], 00:37:51.291 | 99.00th=[27132], 99.50th=[28967], 99.90th=[35390], 99.95th=[40109], 00:37:51.291 | 99.99th=[40109] 00:37:51.291 bw ( KiB/s): min= 2554, max= 2736, per=4.13%, avg=2676.05, stdev=42.68, samples=19 00:37:51.291 iops : min= 638, max= 684, avg=668.89, stdev=10.77, samples=19 00:37:51.291 lat (msec) : 20=1.43%, 50=98.57% 00:37:51.291 cpu : usr=98.59%, sys=0.95%, ctx=150, majf=0, minf=9 00:37:51.291 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:51.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.291 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.291 issued rwts: total=6710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.291 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.291 filename1: (groupid=0, jobs=1): err= 0: pid=1683509: Wed Nov 20 10:11:20 2024 00:37:51.291 read: IOPS=673, BW=2696KiB/s (2760kB/s)(26.3MiB/10004msec) 00:37:51.291 slat (nsec): min=5476, max=91314, avg=23530.01, stdev=14548.36 00:37:51.291 clat (usec): min=3907, max=43840, avg=23509.60, stdev=1955.00 00:37:51.291 lat (usec): min=3913, max=43859, avg=23533.13, stdev=1956.04 00:37:51.291 clat percentiles (usec): 00:37:51.291 | 1.00th=[15795], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:37:51.291 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:37:51.291 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:51.291 | 99.00th=[25560], 99.50th=[28443], 99.90th=[40109], 99.95th=[40109], 00:37:51.291 | 99.99th=[43779] 00:37:51.291 bw ( KiB/s): min= 2432, max= 2816, per=4.13%, avg=2676.05, stdev=73.33, samples=19 00:37:51.291 iops : min= 608, max= 704, avg=668.89, stdev=18.35, samples=19 00:37:51.291 lat (msec) : 4=0.24%, 10=0.47%, 20=1.26%, 50=98.03% 00:37:51.291 cpu : usr=98.02%, sys=1.34%, ctx=154, majf=0, minf=9 00:37:51.291 IO depths : 1=5.5%, 2=11.7%, 4=24.8%, 8=51.0%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:51.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.291 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.291 issued rwts: total=6742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.291 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.291 filename1: (groupid=0, jobs=1): err= 0: pid=1683510: Wed Nov 20 10:11:20 2024 00:37:51.291 read: IOPS=686, BW=2748KiB/s (2813kB/s)(26.9MiB/10025msec) 00:37:51.291 slat (nsec): min=5566, max=77365, avg=13551.44, stdev=10774.54 00:37:51.291 clat (usec): min=6720, max=44358, avg=23170.23, stdev=3997.08 00:37:51.291 lat (usec): min=6744, max=44364, avg=23183.78, stdev=3997.80 00:37:51.291 clat percentiles (usec): 00:37:51.291 | 1.00th=[10159], 5.00th=[15139], 10.00th=[19006], 20.00th=[22676], 00:37:51.291 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:51.291 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25560], 95.00th=[28181], 00:37:51.291 | 99.00th=[36963], 99.50th=[40109], 99.90th=[44303], 99.95th=[44303], 00:37:51.291 | 99.99th=[44303] 00:37:51.291 bw ( KiB/s): min= 2560, max= 3136, per=4.24%, avg=2751.40, stdev=148.13, samples=20 00:37:51.291 iops : min= 640, max= 784, avg=687.80, stdev=37.04, samples=20 00:37:51.291 lat (msec) : 10=0.94%, 20=11.76%, 50=87.29% 00:37:51.291 cpu : usr=98.78%, sys=0.90%, ctx=103, majf=0, minf=9 00:37:51.291 IO depths : 1=2.5%, 2=5.1%, 4=14.0%, 8=67.9%, 16=10.5%, 32=0.0%, >=64=0.0% 00:37:51.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.291 complete : 0=0.0%, 4=91.1%, 8=3.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.291 issued rwts: total=6886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.291 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.291 filename1: (groupid=0, jobs=1): err= 0: pid=1683511: Wed Nov 20 10:11:20 2024 00:37:51.291 read: IOPS=673, BW=2695KiB/s (2760kB/s)(26.3MiB/10004msec) 00:37:51.291 slat (nsec): min=5515, max=76864, avg=15226.27, stdev=10007.99 00:37:51.291 clat (usec): min=3860, max=44080, avg=23622.76, stdev=2015.68 00:37:51.291 lat (usec): min=3866, max=44101, avg=23637.99, stdev=2016.05 00:37:51.291 clat percentiles (usec): 00:37:51.291 | 1.00th=[17695], 5.00th=[22676], 10.00th=[23200], 20.00th=[23462], 00:37:51.291 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:51.291 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:51.291 | 99.00th=[27919], 99.50th=[28705], 99.90th=[40633], 99.95th=[40633], 00:37:51.291 | 99.99th=[44303] 00:37:51.291 bw ( KiB/s): min= 2436, max= 2800, per=4.13%, avg=2675.68, stdev=66.56, samples=19 00:37:51.291 iops : min= 609, max= 700, avg=668.79, stdev=16.66, samples=19 00:37:51.291 lat (msec) : 4=0.21%, 10=0.45%, 20=1.69%, 50=97.66% 00:37:51.291 cpu : usr=98.60%, sys=0.96%, ctx=109, majf=0, minf=9 00:37:51.291 IO depths : 1=3.1%, 2=9.0%, 4=23.5%, 8=54.7%, 16=9.6%, 32=0.0%, >=64=0.0% 00:37:51.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.291 complete : 0=0.0%, 4=94.0%, 8=0.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.291 issued rwts: total=6740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.291 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.291 filename1: (groupid=0, jobs=1): err= 0: pid=1683512: Wed Nov 20 10:11:20 2024 00:37:51.291 read: IOPS=671, BW=2686KiB/s (2750kB/s)(26.2MiB/10009msec) 00:37:51.291 slat (nsec): min=5566, max=52592, avg=10004.04, stdev=6960.63 00:37:51.291 clat (usec): min=7781, max=39602, avg=23744.36, stdev=1966.01 00:37:51.291 lat (usec): min=7788, max=39608, avg=23754.37, stdev=1965.98 00:37:51.291 clat percentiles (usec): 00:37:51.291 | 1.00th=[13698], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:37:51.291 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:51.291 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:51.291 | 99.00th=[30278], 99.50th=[36439], 99.90th=[38536], 99.95th=[38536], 00:37:51.291 | 99.99th=[39584] 00:37:51.291 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2680.32, stdev=29.22, samples=19 00:37:51.291 iops : min= 640, max= 672, avg=670.00, stdev= 7.30, samples=19 00:37:51.291 lat (msec) : 10=0.30%, 20=1.13%, 50=98.57% 00:37:51.291 cpu : usr=98.62%, sys=0.95%, ctx=92, majf=0, minf=9 00:37:51.291 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:37:51.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.291 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.291 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.291 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.291 filename2: (groupid=0, jobs=1): err= 0: pid=1683513: Wed Nov 20 10:11:20 2024 00:37:51.291 read: IOPS=671, BW=2686KiB/s (2751kB/s)(26.2MiB/10006msec) 00:37:51.291 slat (usec): min=5, max=108, avg=34.30, stdev=18.34 00:37:51.291 clat (usec): min=15312, max=28082, avg=23519.09, stdev=768.41 
00:37:51.291 lat (usec): min=15333, max=28103, avg=23553.39, stdev=768.27 00:37:51.291 clat percentiles (usec): 00:37:51.291 | 1.00th=[22152], 5.00th=[22938], 10.00th=[22938], 20.00th=[23200], 00:37:51.291 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:37:51.291 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:51.291 | 99.00th=[25035], 99.50th=[25297], 99.90th=[26346], 99.95th=[27919], 00:37:51.291 | 99.99th=[28181] 00:37:51.291 bw ( KiB/s): min= 2682, max= 2688, per=4.14%, avg=2686.74, stdev= 2.51, samples=19 00:37:51.291 iops : min= 670, max= 672, avg=671.58, stdev= 0.84, samples=19 00:37:51.291 lat (msec) : 20=0.77%, 50=99.23% 00:37:51.291 cpu : usr=98.70%, sys=0.91%, ctx=168, majf=0, minf=9 00:37:51.291 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:51.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.291 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.291 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.291 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.291 filename2: (groupid=0, jobs=1): err= 0: pid=1683514: Wed Nov 20 10:11:20 2024 00:37:51.291 read: IOPS=673, BW=2694KiB/s (2759kB/s)(26.3MiB/10001msec) 00:37:51.291 slat (nsec): min=5583, max=96301, avg=11725.14, stdev=9920.48 00:37:51.291 clat (usec): min=10512, max=28206, avg=23661.99, stdev=1218.66 00:37:51.291 lat (usec): min=10522, max=28213, avg=23673.72, stdev=1217.82 00:37:51.291 clat percentiles (usec): 00:37:51.291 | 1.00th=[16450], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:37:51.291 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:51.291 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:51.291 | 99.00th=[25035], 99.50th=[25297], 99.90th=[25560], 99.95th=[26608], 00:37:51.291 | 99.99th=[28181] 00:37:51.292 bw ( KiB/s): min= 2554, max= 2816, per=4.16%, avg=2694.42, stdev=67.79, samples=19 00:37:51.292 iops : min= 638, max= 704, avg=673.58, stdev=17.00, samples=19 00:37:51.292 lat (msec) : 20=1.22%, 50=98.78% 00:37:51.292 cpu : usr=98.86%, sys=0.85%, ctx=98, majf=0, minf=9 00:37:51.292 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:51.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.292 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.292 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.292 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.292 filename2: (groupid=0, jobs=1): err= 0: pid=1683515: Wed Nov 20 10:11:20 2024 00:37:51.292 read: IOPS=672, BW=2691KiB/s (2755kB/s)(26.3MiB/10014msec) 00:37:51.292 slat (usec): min=5, max=102, avg=30.58, stdev=17.11 00:37:51.292 clat (usec): min=13799, max=29135, avg=23524.03, stdev=964.76 00:37:51.292 lat (usec): min=13816, max=29170, avg=23554.62, stdev=964.57 00:37:51.292 clat percentiles (usec): 00:37:51.292 | 1.00th=[18744], 5.00th=[22938], 10.00th=[22938], 20.00th=[23200], 00:37:51.292 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:37:51.292 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:37:51.292 | 99.00th=[25035], 99.50th=[25297], 99.90th=[26346], 99.95th=[28705], 00:37:51.292 | 99.99th=[29230] 00:37:51.292 bw ( KiB/s): min= 2560, max= 2816, per=4.15%, avg=2688.00, stdev=42.67, samples=19 00:37:51.292 iops : min= 640, max= 704, 
avg=672.00, stdev=10.67, samples=19 00:37:51.292 lat (msec) : 20=1.31%, 50=98.69% 00:37:51.292 cpu : usr=99.07%, sys=0.64%, ctx=61, majf=0, minf=9 00:37:51.292 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:51.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.292 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.292 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.292 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.292 filename2: (groupid=0, jobs=1): err= 0: pid=1683516: Wed Nov 20 10:11:20 2024 00:37:51.292 read: IOPS=704, BW=2819KiB/s (2886kB/s)(27.5MiB/10008msec) 00:37:51.292 slat (nsec): min=5565, max=97665, avg=17165.03, stdev=15294.67 00:37:51.292 clat (usec): min=6708, max=54432, avg=22582.62, stdev=4445.28 00:37:51.292 lat (usec): min=6745, max=54454, avg=22599.79, stdev=4446.90 00:37:51.292 clat percentiles (usec): 00:37:51.292 | 1.00th=[10421], 5.00th=[14091], 10.00th=[16319], 20.00th=[19792], 00:37:51.292 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:37:51.292 | 70.00th=[23725], 80.00th=[23987], 90.00th=[25035], 95.00th=[29230], 00:37:51.292 | 99.00th=[38011], 99.50th=[39584], 99.90th=[44303], 99.95th=[44303], 00:37:51.292 | 99.99th=[54264] 00:37:51.292 bw ( KiB/s): min= 2608, max= 3040, per=4.33%, avg=2810.79, stdev=126.85, samples=19 00:37:51.292 iops : min= 652, max= 760, avg=702.63, stdev=31.74, samples=19 00:37:51.292 lat (msec) : 10=0.87%, 20=19.46%, 50=79.65%, 100=0.03% 00:37:51.292 cpu : usr=98.76%, sys=0.92%, ctx=109, majf=0, minf=10 00:37:51.292 IO depths : 1=1.4%, 2=3.9%, 4=12.9%, 8=69.2%, 16=12.6%, 32=0.0%, >=64=0.0% 00:37:51.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.292 complete : 0=0.0%, 4=91.2%, 8=4.7%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.292 issued rwts: total=7052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.292 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.292 filename2: (groupid=0, jobs=1): err= 0: pid=1683517: Wed Nov 20 10:11:20 2024 00:37:51.292 read: IOPS=688, BW=2753KiB/s (2820kB/s)(26.9MiB/10005msec) 00:37:51.292 slat (usec): min=5, max=104, avg=19.47, stdev=16.93 00:37:51.292 clat (usec): min=4054, max=51461, avg=23101.60, stdev=4049.54 00:37:51.292 lat (usec): min=4060, max=51480, avg=23121.07, stdev=4051.12 00:37:51.292 clat percentiles (usec): 00:37:51.292 | 1.00th=[11076], 5.00th=[15139], 10.00th=[17957], 20.00th=[21890], 00:37:51.292 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:37:51.292 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25822], 95.00th=[29492], 00:37:51.292 | 99.00th=[36439], 99.50th=[39060], 99.90th=[41157], 99.95th=[41681], 00:37:51.292 | 99.99th=[51643] 00:37:51.292 bw ( KiB/s): min= 2554, max= 2970, per=4.22%, avg=2736.37, stdev=103.51, samples=19 00:37:51.292 iops : min= 638, max= 742, avg=683.95, stdev=25.89, samples=19 00:37:51.292 lat (msec) : 10=0.81%, 20=14.03%, 50=85.13%, 100=0.03% 00:37:51.292 cpu : usr=98.56%, sys=1.01%, ctx=57, majf=0, minf=9 00:37:51.292 IO depths : 1=1.9%, 2=4.2%, 4=11.6%, 8=69.8%, 16=12.4%, 32=0.0%, >=64=0.0% 00:37:51.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.292 complete : 0=0.0%, 4=90.9%, 8=5.3%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.292 issued rwts: total=6887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.292 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.292 
filename2: (groupid=0, jobs=1): err= 0: pid=1683518: Wed Nov 20 10:11:20 2024 00:37:51.292 read: IOPS=676, BW=2706KiB/s (2771kB/s)(26.4MiB/10004msec) 00:37:51.292 slat (nsec): min=5573, max=97951, avg=21121.94, stdev=14750.78 00:37:51.292 clat (usec): min=8750, max=51625, avg=23470.37, stdev=3287.21 00:37:51.292 lat (usec): min=8757, max=51644, avg=23491.49, stdev=3288.61 00:37:51.292 clat percentiles (usec): 00:37:51.292 | 1.00th=[12780], 5.00th=[17171], 10.00th=[21103], 20.00th=[23200], 00:37:51.292 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:37:51.292 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24773], 95.00th=[27657], 00:37:51.292 | 99.00th=[36963], 99.50th=[39060], 99.90th=[42206], 99.95th=[51643], 00:37:51.292 | 99.99th=[51643] 00:37:51.292 bw ( KiB/s): min= 2560, max= 3049, per=4.16%, avg=2695.42, stdev=93.87, samples=19 00:37:51.292 iops : min= 640, max= 762, avg=673.79, stdev=23.42, samples=19 00:37:51.292 lat (msec) : 10=0.33%, 20=7.70%, 50=91.90%, 100=0.07% 00:37:51.292 cpu : usr=99.01%, sys=0.73%, ctx=23, majf=0, minf=9 00:37:51.292 IO depths : 1=3.9%, 2=8.0%, 4=19.2%, 8=59.9%, 16=9.0%, 32=0.0%, >=64=0.0% 00:37:51.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.292 complete : 0=0.0%, 4=92.7%, 8=2.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.292 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.292 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.292 filename2: (groupid=0, jobs=1): err= 0: pid=1683519: Wed Nov 20 10:11:20 2024 00:37:51.292 read: IOPS=674, BW=2700KiB/s (2765kB/s)(26.4MiB/10004msec) 00:37:51.292 slat (nsec): min=5509, max=81256, avg=22689.21, stdev=13306.57 00:37:51.292 clat (usec): min=3604, max=40734, avg=23498.90, stdev=2241.47 00:37:51.292 lat (usec): min=3610, max=40765, avg=23521.59, stdev=2243.15 00:37:51.292 clat percentiles (usec): 00:37:51.292 | 1.00th=[14746], 5.00th=[22414], 10.00th=[22938], 20.00th=[23200], 00:37:51.292 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:51.292 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24773], 00:37:51.292 | 99.00th=[28705], 99.50th=[34341], 99.90th=[40633], 99.95th=[40633], 00:37:51.292 | 99.99th=[40633] 00:37:51.292 bw ( KiB/s): min= 2436, max= 2816, per=4.14%, avg=2682.42, stdev=79.03, samples=19 00:37:51.292 iops : min= 609, max= 704, avg=670.47, stdev=19.70, samples=19 00:37:51.292 lat (msec) : 4=0.09%, 10=0.44%, 20=2.98%, 50=96.49% 00:37:51.292 cpu : usr=98.93%, sys=0.81%, ctx=29, majf=0, minf=9 00:37:51.292 IO depths : 1=4.5%, 2=10.2%, 4=23.2%, 8=54.0%, 16=8.2%, 32=0.0%, >=64=0.0% 00:37:51.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.292 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.292 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.292 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.292 filename2: (groupid=0, jobs=1): err= 0: pid=1683520: Wed Nov 20 10:11:20 2024 00:37:51.292 read: IOPS=691, BW=2765KiB/s (2831kB/s)(27.0MiB/10006msec) 00:37:51.292 slat (usec): min=5, max=102, avg=20.36, stdev=15.99 00:37:51.292 clat (usec): min=6609, max=43256, avg=22988.30, stdev=4062.54 00:37:51.292 lat (usec): min=6656, max=43263, avg=23008.66, stdev=4064.30 00:37:51.292 clat percentiles (usec): 00:37:51.292 | 1.00th=[11469], 5.00th=[15008], 10.00th=[17695], 20.00th=[22676], 00:37:51.292 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 
00:37:51.292 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[28181], 00:37:51.292 | 99.00th=[38011], 99.50th=[40109], 99.90th=[41157], 99.95th=[43254], 00:37:51.292 | 99.99th=[43254] 00:37:51.292 bw ( KiB/s): min= 2528, max= 3392, per=4.27%, avg=2769.84, stdev=212.76, samples=19 00:37:51.292 iops : min= 632, max= 848, avg=692.42, stdev=53.16, samples=19 00:37:51.292 lat (msec) : 10=0.59%, 20=14.84%, 50=84.57% 00:37:51.292 cpu : usr=98.69%, sys=1.04%, ctx=30, majf=0, minf=9 00:37:51.293 IO depths : 1=3.9%, 2=7.9%, 4=17.6%, 8=61.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:37:51.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.293 complete : 0=0.0%, 4=92.1%, 8=2.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.293 issued rwts: total=6916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.293 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:51.293 00:37:51.293 Run status group 0 (all jobs): 00:37:51.293 READ: bw=63.3MiB/s (66.4MB/s), 2644KiB/s-2819KiB/s (2707kB/s-2886kB/s), io=635MiB (666MB), run=10001-10025msec 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.293 10:11:20 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:51.293 bdev_null0 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:51.293 [2024-11-20 10:11:20.768331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:51.293 bdev_null1 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
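The trace above has just finished standing up both DIF type 1 null-bdev subsystems and is handing the generated config to fio. For reference, a condensed standalone sketch of the same bring-up, assuming rpc_cmd resolves to scripts/rpc.py against an already-running nvmf target; bdev geometry, serial numbers, NQNs, and the 10.0.0.2:4420 TCP listener are copied verbatim from the log:

#!/usr/bin/env bash
rpc=./scripts/rpc.py   # assumed path to the SPDK RPC client
for sub in 0 1; do
  # 64 MB null bdev, 512 B blocks plus 16 B metadata, protection information type 1
  $rpc bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
  $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
       --serial-number "53313233-$sub" --allow-any-host
  $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
  $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
       -t tcp -a 10.0.0.2 -s 4420
done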
00:37:51.293 10:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:51.294 { 00:37:51.294 "params": { 00:37:51.294 "name": "Nvme$subsystem", 00:37:51.294 "trtype": "$TEST_TRANSPORT", 00:37:51.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:51.294 "adrfam": "ipv4", 00:37:51.294 "trsvcid": "$NVMF_PORT", 00:37:51.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:51.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:51.294 "hdgst": ${hdgst:-false}, 00:37:51.294 "ddgst": ${ddgst:-false} 00:37:51.294 }, 00:37:51.294 "method": "bdev_nvme_attach_controller" 00:37:51.294 } 00:37:51.294 EOF 00:37:51.294 )") 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:51.294 { 00:37:51.294 "params": { 00:37:51.294 "name": "Nvme$subsystem", 00:37:51.294 "trtype": "$TEST_TRANSPORT", 00:37:51.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:51.294 "adrfam": "ipv4", 00:37:51.294 "trsvcid": "$NVMF_PORT", 00:37:51.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:51.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:51.294 "hdgst": ${hdgst:-false}, 00:37:51.294 "ddgst": ${ddgst:-false} 00:37:51.294 }, 00:37:51.294 "method": "bdev_nvme_attach_controller" 00:37:51.294 } 00:37:51.294 EOF 00:37:51.294 )") 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:51.294 "params": { 00:37:51.294 "name": "Nvme0", 00:37:51.294 "trtype": "tcp", 00:37:51.294 "traddr": "10.0.0.2", 00:37:51.294 "adrfam": "ipv4", 00:37:51.294 "trsvcid": "4420", 00:37:51.294 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:51.294 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:51.294 "hdgst": false, 00:37:51.294 "ddgst": false 00:37:51.294 }, 00:37:51.294 "method": "bdev_nvme_attach_controller" 00:37:51.294 },{ 00:37:51.294 "params": { 00:37:51.294 "name": "Nvme1", 00:37:51.294 "trtype": "tcp", 00:37:51.294 "traddr": "10.0.0.2", 00:37:51.294 "adrfam": "ipv4", 00:37:51.294 "trsvcid": "4420", 00:37:51.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:51.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:51.294 "hdgst": false, 00:37:51.294 "ddgst": false 00:37:51.294 }, 00:37:51.294 "method": "bdev_nvme_attach_controller" 00:37:51.294 }' 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:51.294 10:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:51.294 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:51.294 ... 00:37:51.294 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:51.294 ... 
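At this point fio has everything it needs on the two pipe fds: the bdev_nvme_attach_controller JSON printed above arrives on /dev/fd/62 and the generated job file on /dev/fd/61. A minimal standalone equivalent using ordinary files; the outer subsystems/config wrapper is the usual SPDK JSON-config shape and is an assumption here (only the inner params objects appear verbatim in the trace), file paths are illustrative, and only Nvme0 is shown (the log attaches Nvme1 the same way):

cat > /tmp/spdk_nvme.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
# Same invocation as the trace, with the fd redirections replaced by files:
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /tmp/spdk_nvme.json /tmp/dif.fio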
00:37:51.294 fio-3.35 00:37:51.294 Starting 4 threads 00:37:56.710 00:37:56.710 filename0: (groupid=0, jobs=1): err= 0: pid=1685706: Wed Nov 20 10:11:27 2024 00:37:56.710 read: IOPS=2868, BW=22.4MiB/s (23.5MB/s)(112MiB/5001msec) 00:37:56.710 slat (nsec): min=7857, max=68072, avg=8884.61, stdev=2435.75 00:37:56.710 clat (usec): min=1135, max=6101, avg=2764.80, stdev=308.94 00:37:56.710 lat (usec): min=1144, max=6129, avg=2773.69, stdev=308.99 00:37:56.710 clat percentiles (usec): 00:37:56.710 | 1.00th=[ 2278], 5.00th=[ 2474], 10.00th=[ 2573], 20.00th=[ 2671], 00:37:56.710 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2704], 00:37:56.710 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2966], 95.00th=[ 3359], 00:37:56.710 | 99.00th=[ 4047], 99.50th=[ 4228], 99.90th=[ 4752], 99.95th=[ 5276], 00:37:56.710 | 99.99th=[ 5997] 00:37:56.710 bw ( KiB/s): min=22396, max=23344, per=24.46%, avg=22957.78, stdev=319.32, samples=9 00:37:56.710 iops : min= 2799, max= 2918, avg=2869.67, stdev=40.02, samples=9 00:37:56.710 lat (msec) : 2=0.20%, 4=98.19%, 10=1.62% 00:37:56.710 cpu : usr=94.44%, sys=4.46%, ctx=287, majf=0, minf=45 00:37:56.710 IO depths : 1=0.1%, 2=0.1%, 4=71.6%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:56.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:56.710 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:56.710 issued rwts: total=14347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:56.710 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:56.710 filename0: (groupid=0, jobs=1): err= 0: pid=1685707: Wed Nov 20 10:11:27 2024 00:37:56.710 read: IOPS=2897, BW=22.6MiB/s (23.7MB/s)(113MiB/5001msec) 00:37:56.710 slat (nsec): min=7854, max=60300, avg=8788.07, stdev=2374.35 00:37:56.710 clat (usec): min=1362, max=4661, avg=2738.78, stdev=259.22 00:37:56.710 lat (usec): min=1370, max=4686, avg=2747.57, stdev=259.26 00:37:56.710 clat percentiles (usec): 00:37:56.710 | 1.00th=[ 2278], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2638], 00:37:56.710 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2704], 00:37:56.710 | 70.00th=[ 2737], 80.00th=[ 2737], 90.00th=[ 2933], 95.00th=[ 3228], 00:37:56.710 | 99.00th=[ 3982], 99.50th=[ 4146], 99.90th=[ 4490], 99.95th=[ 4621], 00:37:56.710 | 99.99th=[ 4686] 00:37:56.710 bw ( KiB/s): min=22192, max=23520, per=24.64%, avg=23125.33, stdev=401.76, samples=9 00:37:56.710 iops : min= 2774, max= 2940, avg=2890.67, stdev=50.22, samples=9 00:37:56.710 lat (msec) : 2=0.24%, 4=98.79%, 10=0.97% 00:37:56.710 cpu : usr=96.64%, sys=3.12%, ctx=6, majf=0, minf=51 00:37:56.710 IO depths : 1=0.1%, 2=0.1%, 4=69.9%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:56.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:56.710 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:56.710 issued rwts: total=14491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:56.710 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:56.710 filename1: (groupid=0, jobs=1): err= 0: pid=1685708: Wed Nov 20 10:11:27 2024 00:37:56.710 read: IOPS=3018, BW=23.6MiB/s (24.7MB/s)(118MiB/5002msec) 00:37:56.710 slat (nsec): min=5390, max=52830, avg=8035.16, stdev=2695.78 00:37:56.710 clat (usec): min=715, max=4469, avg=2627.88, stdev=287.40 00:37:56.710 lat (usec): min=731, max=4477, avg=2635.92, stdev=286.86 00:37:56.710 clat percentiles (usec): 00:37:56.710 | 1.00th=[ 1696], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2507], 00:37:56.710 | 30.00th=[ 2638], 
40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:37:56.710 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2769], 95.00th=[ 2900], 00:37:56.710 | 99.00th=[ 3589], 99.50th=[ 3818], 99.90th=[ 4146], 99.95th=[ 4228], 00:37:56.710 | 99.99th=[ 4490] 00:37:56.710 bw ( KiB/s): min=23600, max=26736, per=25.82%, avg=24234.67, stdev=998.05, samples=9 00:37:56.710 iops : min= 2950, max= 3342, avg=3029.33, stdev=124.76, samples=9 00:37:56.710 lat (usec) : 750=0.02%, 1000=0.28% 00:37:56.710 lat (msec) : 2=1.85%, 4=97.60%, 10=0.26% 00:37:56.710 cpu : usr=96.26%, sys=3.50%, ctx=9, majf=0, minf=22 00:37:56.710 IO depths : 1=0.1%, 2=2.0%, 4=70.7%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:56.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:56.710 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:56.710 issued rwts: total=15098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:56.710 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:56.710 filename1: (groupid=0, jobs=1): err= 0: pid=1685709: Wed Nov 20 10:11:27 2024 00:37:56.710 read: IOPS=2949, BW=23.0MiB/s (24.2MB/s)(115MiB/5001msec) 00:37:56.710 slat (nsec): min=5393, max=72065, avg=6400.86, stdev=2162.06 00:37:56.710 clat (usec): min=1280, max=4518, avg=2697.50, stdev=227.63 00:37:56.710 lat (usec): min=1286, max=4523, avg=2703.90, stdev=227.65 00:37:56.710 clat percentiles (usec): 00:37:56.710 | 1.00th=[ 2057], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2606], 00:37:56.710 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2704], 00:37:56.710 | 70.00th=[ 2737], 80.00th=[ 2737], 90.00th=[ 2835], 95.00th=[ 2999], 00:37:56.710 | 99.00th=[ 3556], 99.50th=[ 3851], 99.90th=[ 4228], 99.95th=[ 4293], 00:37:56.710 | 99.99th=[ 4490] 00:37:56.710 bw ( KiB/s): min=22768, max=24128, per=25.12%, avg=23576.89, stdev=423.71, samples=9 00:37:56.710 iops : min= 2846, max= 3016, avg=2947.11, stdev=52.96, samples=9 00:37:56.710 lat (msec) : 2=0.68%, 4=99.06%, 10=0.26% 00:37:56.710 cpu : usr=96.16%, sys=3.60%, ctx=5, majf=0, minf=44 00:37:56.710 IO depths : 1=0.1%, 2=0.2%, 4=66.6%, 8=33.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:56.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:56.710 complete : 0=0.0%, 4=96.9%, 8=3.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:56.710 issued rwts: total=14749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:56.710 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:56.710 00:37:56.710 Run status group 0 (all jobs): 00:37:56.710 READ: bw=91.7MiB/s (96.1MB/s), 22.4MiB/s-23.6MiB/s (23.5MB/s-24.7MB/s), io=458MiB (481MB), run=5001-5002msec 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
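A quick arithmetic check on the group summary above: total bytes moved over elapsed time should land close to the aggregate bandwidth fio reports (fio derives the group figure from the jobs' overlap window, so a small gap is normal):

# io=458 MiB over run=5001-5002 msec, from the Run status line above
echo "scale=1; 458 / 5.002" | bc   # -> 91.5, against fio's reported 91.7MiB/s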
00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.710 00:37:56.710 real 0m24.478s 00:37:56.710 user 5m18.191s 00:37:56.710 sys 0m4.749s 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:56.710 10:11:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:56.710 ************************************ 00:37:56.710 END TEST fio_dif_rand_params 00:37:56.710 ************************************ 00:37:56.710 10:11:27 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:56.710 10:11:27 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:56.710 10:11:27 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:56.710 10:11:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:56.710 ************************************ 00:37:56.710 START TEST fio_dif_digest 00:37:56.710 ************************************ 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:56.710 10:11:27 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:56.710 bdev_null0 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:56.710 [2024-11-20 10:11:27.328391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.710 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:56.711 { 00:37:56.711 "params": { 00:37:56.711 "name": "Nvme$subsystem", 00:37:56.711 "trtype": "$TEST_TRANSPORT", 00:37:56.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:56.711 "adrfam": "ipv4", 00:37:56.711 "trsvcid": "$NVMF_PORT", 00:37:56.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:37:56.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:56.711 "hdgst": ${hdgst:-false}, 00:37:56.711 "ddgst": ${ddgst:-false} 00:37:56.711 }, 00:37:56.711 "method": "bdev_nvme_attach_controller" 00:37:56.711 } 00:37:56.711 EOF 00:37:56.711 )") 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:56.711 "params": { 00:37:56.711 "name": "Nvme0", 00:37:56.711 "trtype": "tcp", 00:37:56.711 "traddr": "10.0.0.2", 00:37:56.711 "adrfam": "ipv4", 00:37:56.711 "trsvcid": "4420", 00:37:56.711 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:56.711 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:56.711 "hdgst": true, 00:37:56.711 "ddgst": true 00:37:56.711 }, 00:37:56.711 "method": "bdev_nvme_attach_controller" 00:37:56.711 }' 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:56.711 10:11:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:57.008 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:57.008 ... 
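Relative to the rand-params config earlier, the JSON printed above changes only the two digest knobs: hdgst and ddgst are now true, which on NVMe/TCP turns on the CRC32C header and data digests for each PDU. Starting from the illustrative /tmp/spdk_nvme.json of the earlier sketch, the same edit as a jq one-liner (jq is also what the harness itself uses to assemble the config):

jq '.subsystems[].config[].params += {hdgst: true, ddgst: true}' \
    /tmp/spdk_nvme.json > /tmp/spdk_digest.json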
00:37:57.008 fio-3.35 00:37:57.008 Starting 3 threads 00:38:09.241 00:38:09.241 filename0: (groupid=0, jobs=1): err= 0: pid=1687219: Wed Nov 20 10:11:38 2024 00:38:09.241 read: IOPS=337, BW=42.1MiB/s (44.2MB/s)(423MiB/10045msec) 00:38:09.241 slat (nsec): min=5625, max=31345, avg=7850.19, stdev=1474.70 00:38:09.241 clat (usec): min=5555, max=48484, avg=8877.54, stdev=1618.84 00:38:09.241 lat (usec): min=5564, max=48491, avg=8885.39, stdev=1618.87 00:38:09.241 clat percentiles (usec): 00:38:09.241 | 1.00th=[ 6521], 5.00th=[ 6980], 10.00th=[ 7177], 20.00th=[ 7570], 00:38:09.241 | 30.00th=[ 7832], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9372], 00:38:09.241 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10552], 95.00th=[10945], 00:38:09.241 | 99.00th=[11731], 99.50th=[11994], 99.90th=[13042], 99.95th=[44827], 00:38:09.241 | 99.99th=[48497] 00:38:09.241 bw ( KiB/s): min=39936, max=46336, per=39.86%, avg=43315.20, stdev=1498.72, samples=20 00:38:09.241 iops : min= 312, max= 362, avg=338.40, stdev=11.71, samples=20 00:38:09.241 lat (msec) : 10=75.58%, 20=24.37%, 50=0.06% 00:38:09.241 cpu : usr=93.79%, sys=5.88%, ctx=118, majf=0, minf=108 00:38:09.241 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:09.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.241 issued rwts: total=3386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.241 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:09.241 filename0: (groupid=0, jobs=1): err= 0: pid=1687220: Wed Nov 20 10:11:38 2024 00:38:09.241 read: IOPS=344, BW=43.1MiB/s (45.2MB/s)(433MiB/10045msec) 00:38:09.241 slat (nsec): min=5730, max=32486, avg=6556.97, stdev=919.44 00:38:09.241 clat (usec): min=5801, max=49977, avg=8681.54, stdev=1550.62 00:38:09.241 lat (usec): min=5807, max=49983, avg=8688.10, stdev=1550.71 00:38:09.241 clat percentiles (usec): 00:38:09.241 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7177], 20.00th=[ 7504], 00:38:09.241 | 30.00th=[ 7767], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 8979], 00:38:09.241 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10683], 00:38:09.241 | 99.00th=[11338], 99.50th=[11600], 99.90th=[15664], 99.95th=[45876], 00:38:09.241 | 99.99th=[50070] 00:38:09.241 bw ( KiB/s): min=41472, max=46848, per=40.77%, avg=44300.80, stdev=1365.97, samples=20 00:38:09.241 iops : min= 324, max= 366, avg=346.10, stdev=10.67, samples=20 00:38:09.241 lat (msec) : 10=83.89%, 20=16.06%, 50=0.06% 00:38:09.241 cpu : usr=94.10%, sys=5.68%, ctx=16, majf=0, minf=205 00:38:09.241 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:09.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.241 issued rwts: total=3463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.241 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:09.241 filename0: (groupid=0, jobs=1): err= 0: pid=1687221: Wed Nov 20 10:11:38 2024 00:38:09.241 read: IOPS=167, BW=20.9MiB/s (21.9MB/s)(210MiB/10047msec) 00:38:09.241 slat (nsec): min=5754, max=31895, avg=6590.84, stdev=1217.36 00:38:09.241 clat (msec): min=7, max=131, avg=17.90, stdev=17.37 00:38:09.241 lat (msec): min=7, max=131, avg=17.91, stdev=17.37 00:38:09.241 clat percentiles (msec): 00:38:09.241 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:38:09.241 | 30.00th=[ 10], 40.00th=[ 11], 
50.00th=[ 11], 60.00th=[ 11], 00:38:09.241 | 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 51], 95.00th=[ 52], 00:38:09.241 | 99.00th=[ 91], 99.50th=[ 92], 99.90th=[ 94], 99.95th=[ 132], 00:38:09.241 | 99.99th=[ 132] 00:38:09.241 bw ( KiB/s): min=12544, max=32000, per=19.77%, avg=21478.40, stdev=4922.79, samples=20 00:38:09.241 iops : min= 98, max= 250, avg=167.80, stdev=38.46, samples=20 00:38:09.241 lat (msec) : 10=38.93%, 20=42.92%, 50=4.05%, 100=14.05%, 250=0.06% 00:38:09.242 cpu : usr=95.66%, sys=4.12%, ctx=14, majf=0, minf=56 00:38:09.242 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:09.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.242 issued rwts: total=1680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.242 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:09.242 00:38:09.242 Run status group 0 (all jobs): 00:38:09.242 READ: bw=106MiB/s (111MB/s), 20.9MiB/s-43.1MiB/s (21.9MB/s-45.2MB/s), io=1066MiB (1118MB), run=10045-10047msec 00:38:09.242 10:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:38:09.242 10:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:38:09.242 10:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:38:09.242 10:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:09.242 10:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:38:09.242 10:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:09.242 10:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.242 10:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:09.242 10:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.242 10:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:09.242 10:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.242 10:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:09.242 10:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.242 00:38:09.242 real 0m11.142s 00:38:09.242 user 0m44.940s 00:38:09.242 sys 0m1.872s 00:38:09.242 10:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:09.242 10:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:09.242 ************************************ 00:38:09.242 END TEST fio_dif_digest 00:38:09.242 ************************************ 00:38:09.242 10:11:38 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:38:09.242 10:11:38 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:38:09.242 10:11:38 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:09.242 10:11:38 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:38:09.242 10:11:38 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:09.242 10:11:38 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:38:09.242 10:11:38 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:09.242 10:11:38 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:09.242 rmmod nvme_tcp 00:38:09.242 rmmod nvme_fabrics 00:38:09.242 rmmod nvme_keyring 00:38:09.242 10:11:38 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
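One more consistency check, this time on the per-job digest numbers above: reported bandwidth is just IOPS times block size. For pid=1687219, which reads 128 KiB blocks:

echo "scale=1; 337 * 128 / 1024" | bc   # 337 IOPS * 128 KiB -> 42.1 MiB/s, matching BW=42.1MiB/s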
00:38:09.242 10:11:38 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:38:09.242 10:11:38 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:38:09.242 10:11:38 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1676776 ']' 00:38:09.242 10:11:38 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1676776 00:38:09.242 10:11:38 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1676776 ']' 00:38:09.242 10:11:38 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1676776 00:38:09.242 10:11:38 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:38:09.242 10:11:38 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:09.242 10:11:38 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1676776 00:38:09.242 10:11:38 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:09.242 10:11:38 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:09.242 10:11:38 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1676776' 00:38:09.242 killing process with pid 1676776 00:38:09.242 10:11:38 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1676776 00:38:09.242 10:11:38 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1676776 00:38:09.242 10:11:38 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:09.242 10:11:38 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:11.158 Waiting for block devices as requested 00:38:11.418 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:11.418 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:11.418 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:11.418 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:11.679 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:11.679 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:11.679 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:11.941 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:11.941 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:12.201 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:12.201 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:12.201 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:12.461 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:12.461 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:12.461 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:12.722 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:12.722 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:12.983 10:11:43 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:12.983 10:11:43 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:12.983 10:11:43 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:38:12.983 10:11:43 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:38:12.983 10:11:43 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:12.983 10:11:43 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:38:12.983 10:11:43 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:12.983 10:11:43 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:12.983 10:11:43 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:12.983 10:11:43 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:12.983 10:11:43 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:15.525 10:11:45 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:15.525 00:38:15.525 real 1m18.269s 00:38:15.525 user 8m5.320s 
00:38:15.525 sys 0m22.169s 00:38:15.525 10:11:45 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:15.525 10:11:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:15.525 ************************************ 00:38:15.525 END TEST nvmf_dif 00:38:15.525 ************************************ 00:38:15.525 10:11:45 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:15.525 10:11:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:15.525 10:11:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:15.525 10:11:45 -- common/autotest_common.sh@10 -- # set +x 00:38:15.525 ************************************ 00:38:15.525 START TEST nvmf_abort_qd_sizes 00:38:15.525 ************************************ 00:38:15.525 10:11:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:15.525 * Looking for test storage... 00:38:15.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:15.525 10:11:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:15.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.526 --rc genhtml_branch_coverage=1 00:38:15.526 --rc genhtml_function_coverage=1 00:38:15.526 --rc genhtml_legend=1 00:38:15.526 --rc geninfo_all_blocks=1 00:38:15.526 --rc geninfo_unexecuted_blocks=1 00:38:15.526 00:38:15.526 ' 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:15.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.526 --rc genhtml_branch_coverage=1 00:38:15.526 --rc genhtml_function_coverage=1 00:38:15.526 --rc genhtml_legend=1 00:38:15.526 --rc geninfo_all_blocks=1 00:38:15.526 --rc geninfo_unexecuted_blocks=1 00:38:15.526 00:38:15.526 ' 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:15.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.526 --rc genhtml_branch_coverage=1 00:38:15.526 --rc genhtml_function_coverage=1 00:38:15.526 --rc genhtml_legend=1 00:38:15.526 --rc geninfo_all_blocks=1 00:38:15.526 --rc geninfo_unexecuted_blocks=1 00:38:15.526 00:38:15.526 ' 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:15.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.526 --rc genhtml_branch_coverage=1 00:38:15.526 --rc genhtml_function_coverage=1 00:38:15.526 --rc genhtml_legend=1 00:38:15.526 --rc geninfo_all_blocks=1 00:38:15.526 --rc geninfo_unexecuted_blocks=1 00:38:15.526 00:38:15.526 ' 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:15.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:38:15.526 10:11:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:23.671 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:23.671 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:38:23.671 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:23.672 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:23.672 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:23.672 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:23.672 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:23.672 10:11:53 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:23.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:23.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:38:23.672 00:38:23.672 --- 10.0.0.2 ping statistics --- 00:38:23.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:23.672 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:23.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:23.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms
00:38:23.672 
00:38:23.672 --- 10.0.0.1 ping statistics ---
00:38:23.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:23.672 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms
00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0
00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:38:23.672 10:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:38:26.216 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:38:26.216 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:38:26.216 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:38:26.216 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:38:26.216 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:38:26.216 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:38:26.216 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:38:26.477 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:38:26.477 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:38:26.477 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:38:26.477 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:38:26.477 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:38:26.477 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:38:26.477 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:38:26.477 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:38:26.477 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:38:26.477 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:38:26.737 10:11:57 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:38:26.737 10:11:57 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:38:26.737 10:11:57 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:38:26.737 10:11:57 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:38:26.737 10:11:57 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:38:26.737 10:11:57 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:38:26.997 10:11:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf
00:38:26.997 10:11:57 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:38:26.997 10:11:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:26.997 10:11:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:38:26.997 10:11:57 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1696650
00:38:26.997 10:11:57 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1696650
00:38:26.997 10:11:57 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:38:26.997 10:11:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1696650 ']'
00:38:26.997 10:11:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:26.997 10:11:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100
00:38:26.997 10:11:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:38:26.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:26.997 10:11:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:26.997 10:11:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:26.997 [2024-11-20 10:11:57.732576] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:38:26.997 [2024-11-20 10:11:57.732625] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:26.997 [2024-11-20 10:11:57.826635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:26.997 [2024-11-20 10:11:57.868681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:26.998 [2024-11-20 10:11:57.868721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:26.998 [2024-11-20 10:11:57.868729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:26.998 [2024-11-20 10:11:57.868736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:26.998 [2024-11-20 10:11:57.868742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:26.998 [2024-11-20 10:11:57.870590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:26.998 [2024-11-20 10:11:57.870745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:26.998 [2024-11-20 10:11:57.870901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:26.998 [2024-11-20 10:11:57.870901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:38:27.939 
10:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:27.939 10:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:27.939 ************************************ 00:38:27.940 START TEST spdk_target_abort 00:38:27.940 ************************************ 00:38:27.940 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:38:27.940 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:27.940 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:38:27.940 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.940 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:28.201 spdk_targetn1 00:38:28.201 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.201 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:28.201 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.201 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:28.201 [2024-11-20 10:11:58.956303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:28.201 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.201 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:28.201 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.201 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:28.201 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.201 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:28.201 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.201 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:28.201 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.201 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:28.201 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.201 10:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:28.201 [2024-11-20 10:11:59.000621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:28.201 10:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:28.462 [2024-11-20 10:11:59.148916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:189 nsid:1 lba:440 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:38:28.462 [2024-11-20 10:11:59.148953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0038 p:1 m:0 dnr:0 00:38:28.462 [2024-11-20 10:11:59.153679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:480 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:38:28.462 [2024-11-20 10:11:59.153696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0040 p:1 m:0 dnr:0 00:38:28.462 [2024-11-20 10:11:59.153783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:496 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:38:28.462 [2024-11-20 10:11:59.153794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0040 p:1 m:0 dnr:0 00:38:28.462 [2024-11-20 10:11:59.170086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1040 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:38:28.462 [2024-11-20 10:11:59.170112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0083 p:1 m:0 dnr:0 00:38:28.462 [2024-11-20 10:11:59.195988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1936 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:38:28.462 [2024-11-20 10:11:59.196009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00f3 p:1 m:0 dnr:0 00:38:28.462 [2024-11-20 10:11:59.202275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2080 len:8 PRP1 0x200004abe000 PRP2 0x0 00:38:28.462 [2024-11-20 10:11:59.202293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:38:28.462 [2024-11-20 10:11:59.214739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2440 len:8 PRP1 0x200004abe000 PRP2 0x0 00:38:28.462 [2024-11-20 10:11:59.214758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:38:28.462 [2024-11-20 10:11:59.257261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3960 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:38:28.462 [2024-11-20 10:11:59.257282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00f1 p:0 m:0 dnr:0 00:38:31.759 Initializing NVMe Controllers 00:38:31.759 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:31.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:31.759 Initialization complete. Launching workers. 
00:38:31.759 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10983, failed: 8
00:38:31.759 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2622, failed to submit 8369
00:38:31.759 success 739, unsuccessful 1883, failed 0
00:38:31.759 10:12:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:38:31.759 10:12:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
[2024-11-20 10:12:02.385305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:1056 len:8 PRP1 0x200004e48000 PRP2 0x0
[2024-11-20 10:12:02.385345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0088 p:1 m:0 dnr:0
[2024-11-20 10:12:02.393169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:1152 len:8 PRP1 0x200004e4e000 PRP2 0x0
[2024-11-20 10:12:02.393194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:0094 p:1 m:0 dnr:0
[2024-11-20 10:12:02.416367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:1752 len:8 PRP1 0x200004e48000 PRP2 0x0
[2024-11-20 10:12:02.416390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:00e0 p:1 m:0 dnr:0
[2024-11-20 10:12:02.495286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:3608 len:8 PRP1 0x200004e5c000 PRP2 0x0
[2024-11-20 10:12:02.495311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:00cc p:0 m:0 dnr:0
00:38:33.143 [2024-11-20 10:12:03.625307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:29400 len:8 PRP1 0x200004e3e000 PRP2 0x0
00:38:33.143 [2024-11-20 10:12:03.625343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:0069 p:1 m:0 dnr:0
00:38:35.056 Initializing NVMe Controllers
00:38:35.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:38:35.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:38:35.056 Initialization complete. Launching workers.
00:38:35.056 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8462, failed: 5
00:38:35.056 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1212, failed to submit 7255
00:38:35.056 success 348, unsuccessful 864, failed 0
00:38:35.056 10:12:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:38:35.056 10:12:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:38:35.056 [2024-11-20 10:12:05.701319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:180 nsid:1 lba:3568 len:8 PRP1 0x200004af6000 PRP2 0x0
00:38:35.056 [2024-11-20 10:12:05.701344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:180 cdw0:0 sqhd:00b2 p:1 m:0 dnr:0
00:38:35.628 [2024-11-20 10:12:06.305593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:183 nsid:1 lba:73272 len:8 PRP1 0x200004ae4000 PRP2 0x0
00:38:35.628 [2024-11-20 10:12:06.305617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:183 cdw0:0 sqhd:00c0 p:1 m:0 dnr:0
00:38:35.888 [2024-11-20 10:12:06.754541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:147 nsid:1 lba:125336 len:8 PRP1 0x200004b1e000 PRP2 0x0
00:38:35.888 [2024-11-20 10:12:06.754561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:147 cdw0:0 sqhd:002c p:1 m:0 dnr:0
00:38:36.827 [2024-11-20 10:12:07.545274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:173 nsid:1 lba:215504 len:8 PRP1 0x200004b18000 PRP2 0x0
00:38:36.827 [2024-11-20 10:12:07.545299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:173 cdw0:0 sqhd:002f p:1 m:0 dnr:0
00:38:38.210 Initializing NVMe Controllers
00:38:38.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:38:38.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:38:38.210 Initialization complete. Launching workers.
00:38:38.210 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43151, failed: 4
00:38:38.210 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2748, failed to submit 40407
00:38:38.210 success 595, unsuccessful 2153, failed 0
00:38:38.210 10:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
00:38:38.210 10:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:38.210 10:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:38:38.210 10:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:38.210 10:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:38:38.210 10:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:38.210 10:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:38:40.122 10:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:40.122 10:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1696650
00:38:40.122 10:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1696650 ']'
00:38:40.122 10:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1696650
00:38:40.122 10:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname
00:38:40.122 10:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:40.122 10:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1696650
00:38:40.122 10:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:38:40.122 10:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:38:40.122 10:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1696650'
00:38:40.122 killing process with pid 1696650
00:38:40.122 10:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1696650
00:38:40.122 10:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1696650
00:38:40.122 
00:38:40.122 real 0m12.075s
00:38:40.122 user 0m49.261s
00:38:40.122 sys 0m1.989s
00:38:40.122 10:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:40.122 10:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:38:40.122 ************************************
00:38:40.122 END TEST spdk_target_abort
00:38:40.122 ************************************
00:38:40.122 10:12:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target
00:38:40.122 10:12:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:38:40.122 10:12:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:40.122 10:12:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:38:40.122 ************************************
00:38:40.122 START TEST kernel_target_abort
************************************ 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:40.122 10:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:43.452 Waiting for block devices as requested 00:38:43.452 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:43.452 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:43.712 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:43.712 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:43.712 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:43.973 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:43.973 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:43.973 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:44.234 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:44.234 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:44.495 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:44.495 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:44.495 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:44.756 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:44.756 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:44.756 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:45.016 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:45.277 No valid GPT data, bailing 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:45.277 10:12:16 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:45.277 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:38:45.538 00:38:45.538 Discovery Log Number of Records 2, Generation counter 2 00:38:45.538 =====Discovery Log Entry 0====== 00:38:45.538 trtype: tcp 00:38:45.538 adrfam: ipv4 00:38:45.538 subtype: current discovery subsystem 00:38:45.538 treq: not specified, sq flow control disable supported 00:38:45.538 portid: 1 00:38:45.538 trsvcid: 4420 00:38:45.538 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:45.538 traddr: 10.0.0.1 00:38:45.538 eflags: none 00:38:45.538 sectype: none 00:38:45.538 =====Discovery Log Entry 1====== 00:38:45.538 trtype: tcp 00:38:45.538 adrfam: ipv4 00:38:45.538 subtype: nvme subsystem 00:38:45.538 treq: not specified, sq flow control disable supported 00:38:45.538 portid: 1 00:38:45.538 trsvcid: 4420 00:38:45.538 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:45.538 traddr: 10.0.0.1 00:38:45.538 eflags: none 00:38:45.538 sectype: none 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:45.538 10:12:16 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:45.538 10:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:48.838 Initializing NVMe Controllers 00:38:48.838 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:48.838 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:48.838 Initialization complete. Launching workers. 00:38:48.838 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67424, failed: 0 00:38:48.838 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67424, failed to submit 0 00:38:48.838 success 0, unsuccessful 67424, failed 0 00:38:48.838 10:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:48.838 10:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:52.140 Initializing NVMe Controllers 00:38:52.140 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:52.140 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:52.140 Initialization complete. Launching workers. 
00:38:52.140 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 117018, failed: 0 00:38:52.140 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29454, failed to submit 87564 00:38:52.140 success 0, unsuccessful 29454, failed 0 00:38:52.140 10:12:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:52.140 10:12:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:54.689 Initializing NVMe Controllers 00:38:54.689 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:54.689 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:54.689 Initialization complete. Launching workers. 00:38:54.689 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146436, failed: 0 00:38:54.689 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36650, failed to submit 109786 00:38:54.689 success 0, unsuccessful 36650, failed 0 00:38:54.689 10:12:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:54.689 10:12:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:54.689 10:12:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:54.689 10:12:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:54.689 10:12:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:54.689 10:12:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:54.689 10:12:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:54.689 10:12:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:54.689 10:12:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:54.948 10:12:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:58.248 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:58.248 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:58.248 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:58.248 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:58.248 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:58.248 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:58.248 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:58.248 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:58.248 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:58.248 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:58.248 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:58.248 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:58.514 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:58.514 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:58.514 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:38:58.514 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:00.446 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:00.446 00:39:00.446 real 0m20.485s 00:39:00.446 user 0m9.875s 00:39:00.446 sys 0m6.238s 00:39:00.446 10:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:00.446 10:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:00.446 ************************************ 00:39:00.446 END TEST kernel_target_abort 00:39:00.446 ************************************ 00:39:00.446 10:12:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:00.446 10:12:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:00.446 10:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:00.446 10:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:39:00.446 10:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:00.446 10:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:39:00.446 10:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:00.446 10:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:00.446 rmmod nvme_tcp 00:39:00.707 rmmod nvme_fabrics 00:39:00.707 rmmod nvme_keyring 00:39:00.707 10:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:00.707 10:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:39:00.707 10:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:39:00.707 10:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1696650 ']' 00:39:00.707 10:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1696650 00:39:00.707 10:12:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1696650 ']' 00:39:00.707 10:12:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1696650 00:39:00.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1696650) - No such process 00:39:00.707 10:12:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1696650 is not found' 00:39:00.707 Process with pid 1696650 is not found 00:39:00.707 10:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:00.707 10:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:04.086 Waiting for block devices as requested 00:39:04.086 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:04.086 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:04.086 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:04.086 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:04.346 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:04.346 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:04.346 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:04.346 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:04.607 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:04.607 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:04.869 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:04.869 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:04.869 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:05.129 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:05.129 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:05.129 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:05.129 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:05.701 10:12:36 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:05.701 10:12:36 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:05.701 10:12:36 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:39:05.701 10:12:36 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:39:05.701 10:12:36 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:05.701 10:12:36 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:39:05.701 10:12:36 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:05.701 10:12:36 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:05.701 10:12:36 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:05.701 10:12:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:05.701 10:12:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:07.614 10:12:38 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:07.614 00:39:07.614 real 0m52.487s 00:39:07.614 user 1m4.625s 00:39:07.614 sys 0m19.392s 00:39:07.614 10:12:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:07.614 10:12:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:07.614 ************************************ 00:39:07.614 END TEST nvmf_abort_qd_sizes 00:39:07.614 ************************************ 00:39:07.614 10:12:38 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:07.614 10:12:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:07.614 10:12:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:07.614 10:12:38 -- common/autotest_common.sh@10 -- # set +x 00:39:07.614 ************************************ 00:39:07.614 START TEST keyring_file 00:39:07.614 ************************************ 00:39:07.614 10:12:38 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:07.875 * Looking for test storage... 
00:39:07.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:07.875 10:12:38 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:07.875 10:12:38 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:39:07.875 10:12:38 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:07.875 10:12:38 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@345 -- # : 1 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@353 -- # local d=1 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@355 -- # echo 1 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@353 -- # local d=2 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@355 -- # echo 2 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:07.875 10:12:38 keyring_file -- scripts/common.sh@368 -- # return 0 00:39:07.875 10:12:38 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:07.875 10:12:38 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:07.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.875 --rc genhtml_branch_coverage=1 00:39:07.875 --rc genhtml_function_coverage=1 00:39:07.875 --rc genhtml_legend=1 00:39:07.875 --rc geninfo_all_blocks=1 00:39:07.875 --rc geninfo_unexecuted_blocks=1 00:39:07.875 00:39:07.875 ' 00:39:07.875 10:12:38 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:07.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.875 --rc genhtml_branch_coverage=1 00:39:07.875 --rc genhtml_function_coverage=1 00:39:07.875 --rc genhtml_legend=1 00:39:07.875 --rc geninfo_all_blocks=1 
00:39:07.875 --rc geninfo_unexecuted_blocks=1 00:39:07.875 00:39:07.875 ' 00:39:07.875 10:12:38 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:07.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.876 --rc genhtml_branch_coverage=1 00:39:07.876 --rc genhtml_function_coverage=1 00:39:07.876 --rc genhtml_legend=1 00:39:07.876 --rc geninfo_all_blocks=1 00:39:07.876 --rc geninfo_unexecuted_blocks=1 00:39:07.876 00:39:07.876 ' 00:39:07.876 10:12:38 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:07.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.876 --rc genhtml_branch_coverage=1 00:39:07.876 --rc genhtml_function_coverage=1 00:39:07.876 --rc genhtml_legend=1 00:39:07.876 --rc geninfo_all_blocks=1 00:39:07.876 --rc geninfo_unexecuted_blocks=1 00:39:07.876 00:39:07.876 ' 00:39:07.876 10:12:38 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:07.876 10:12:38 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:07.876 10:12:38 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:39:07.876 10:12:38 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:07.876 10:12:38 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:07.876 10:12:38 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:07.876 10:12:38 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.876 10:12:38 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.876 10:12:38 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.876 10:12:38 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:07.876 10:12:38 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@51 -- # : 0 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:07.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:07.876 10:12:38 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:07.876 10:12:38 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:07.876 10:12:38 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:07.876 10:12:38 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:07.876 10:12:38 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:07.876 10:12:38 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:07.876 10:12:38 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:07.876 10:12:38 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
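The prep_key trace around this point turns each raw hex key into a TP 8006 interchange-format PSK file with 0600 permissions. Below is a sketch under stated assumptions: the log never shows the body of format_key's embedded python, so the exact encoding (base64 over the literal key string plus a little-endian CRC32, digest byte 00 for "none") is an inference, not a copy of the helper.

key=00112233445566778899aabbccddeeff   # key0 from the trace
path=$(mktemp)                         # e.g. /tmp/tmp.c4jrQOKOAi in this run
python3 - "$key" > "$path" <<'EOF'
import base64, sys, zlib
raw = sys.argv[1].encode()                   # assumption: ASCII bytes of the hex string
crc = zlib.crc32(raw).to_bytes(4, "little")  # assumption: CRC32 appended little-endian
print(f"NVMeTLSkey-1:00:{base64.b64encode(raw + crc).decode()}:")
EOF
chmod 0600 "$path"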
00:39:07.876 10:12:38 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:07.876 10:12:38 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:07.876 10:12:38 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:07.876 10:12:38 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:07.876 10:12:38 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.c4jrQOKOAi 00:39:07.876 10:12:38 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:07.876 10:12:38 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:08.137 10:12:38 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.c4jrQOKOAi 00:39:08.138 10:12:38 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.c4jrQOKOAi 00:39:08.138 10:12:38 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.c4jrQOKOAi 00:39:08.138 10:12:38 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:08.138 10:12:38 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:08.138 10:12:38 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:08.138 10:12:38 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:08.138 10:12:38 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:08.138 10:12:38 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:08.138 10:12:38 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rUI3UnCGhJ 00:39:08.138 10:12:38 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:08.138 10:12:38 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:08.138 10:12:38 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:08.138 10:12:38 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:08.138 10:12:38 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:08.138 10:12:38 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:08.138 10:12:38 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:08.138 10:12:38 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rUI3UnCGhJ 00:39:08.138 10:12:38 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rUI3UnCGhJ 00:39:08.138 10:12:38 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.rUI3UnCGhJ 00:39:08.138 10:12:38 keyring_file -- keyring/file.sh@30 -- # tgtpid=1707362 00:39:08.138 10:12:38 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1707362 00:39:08.138 10:12:38 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:08.138 10:12:38 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1707362 ']' 00:39:08.138 10:12:38 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:08.138 10:12:38 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:08.138 10:12:38 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:08.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:08.138 10:12:38 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:08.138 10:12:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:08.138 [2024-11-20 10:12:38.940877] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:39:08.138 [2024-11-20 10:12:38.940958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707362 ] 00:39:08.138 [2024-11-20 10:12:39.033771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:08.397 [2024-11-20 10:12:39.087266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:08.966 10:12:39 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:08.966 10:12:39 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:08.966 10:12:39 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:08.966 10:12:39 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.966 10:12:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:08.966 [2024-11-20 10:12:39.738332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:08.966 null0 00:39:08.966 [2024-11-20 10:12:39.770387] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:08.966 [2024-11-20 10:12:39.770668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:08.966 10:12:39 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.966 10:12:39 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:08.966 10:12:39 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:08.966 10:12:39 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:08.966 10:12:39 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:08.967 10:12:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:08.967 10:12:39 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:08.967 10:12:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:08.967 10:12:39 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:08.967 10:12:39 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.967 10:12:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:08.967 [2024-11-20 10:12:39.802452] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:08.967 request: 00:39:08.967 { 00:39:08.967 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:08.967 "secure_channel": false, 00:39:08.967 "listen_address": { 00:39:08.967 "trtype": "tcp", 00:39:08.967 "traddr": "127.0.0.1", 00:39:08.967 "trsvcid": "4420" 00:39:08.967 }, 00:39:08.967 "method": "nvmf_subsystem_add_listener", 00:39:08.967 "req_id": 1 00:39:08.967 } 00:39:08.967 Got JSON-RPC error response 00:39:08.967 response: 00:39:08.967 { 00:39:08.967 
"code": -32602, 00:39:08.967 "message": "Invalid parameters" 00:39:08.967 } 00:39:08.967 10:12:39 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:08.967 10:12:39 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:08.967 10:12:39 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:08.967 10:12:39 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:08.967 10:12:39 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:08.967 10:12:39 keyring_file -- keyring/file.sh@47 -- # bperfpid=1707452 00:39:08.967 10:12:39 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1707452 /var/tmp/bperf.sock 00:39:08.967 10:12:39 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:08.967 10:12:39 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1707452 ']' 00:39:08.967 10:12:39 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:08.967 10:12:39 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:08.967 10:12:39 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:08.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:08.967 10:12:39 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:08.967 10:12:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:08.967 [2024-11-20 10:12:39.858814] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:39:08.967 [2024-11-20 10:12:39.858862] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707452 ] 00:39:09.227 [2024-11-20 10:12:39.945859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:09.227 [2024-11-20 10:12:39.982422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:09.797 10:12:40 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:09.797 10:12:40 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:09.797 10:12:40 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.c4jrQOKOAi 00:39:09.797 10:12:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.c4jrQOKOAi 00:39:10.058 10:12:40 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rUI3UnCGhJ 00:39:10.058 10:12:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rUI3UnCGhJ 00:39:10.318 10:12:41 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:39:10.318 10:12:41 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:10.318 10:12:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:10.318 10:12:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.319 10:12:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:39:10.319 10:12:41 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.c4jrQOKOAi == \/\t\m\p\/\t\m\p\.\c\4\j\r\Q\O\K\O\A\i ]] 00:39:10.319 10:12:41 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:39:10.319 10:12:41 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:39:10.319 10:12:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:10.319 10:12:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:10.319 10:12:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.580 10:12:41 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.rUI3UnCGhJ == \/\t\m\p\/\t\m\p\.\r\U\I\3\U\n\C\G\h\J ]] 00:39:10.580 10:12:41 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:39:10.580 10:12:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:10.580 10:12:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:10.580 10:12:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:10.580 10:12:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.580 10:12:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:10.840 10:12:41 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:10.840 10:12:41 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:39:10.840 10:12:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:10.840 10:12:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:10.840 10:12:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:10.840 10:12:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:10.840 10:12:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:11.100 10:12:41 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:39:11.100 10:12:41 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:11.100 10:12:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:11.100 [2024-11-20 10:12:41.958095] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:11.360 nvme0n1 00:39:11.360 10:12:42 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:39:11.360 10:12:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:11.360 10:12:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:11.360 10:12:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:11.360 10:12:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:11.360 10:12:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:11.360 10:12:42 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:39:11.360 10:12:42 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:39:11.360 10:12:42 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:39:11.360 10:12:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:11.360 10:12:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:11.360 10:12:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:11.360 10:12:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:11.621 10:12:42 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:39:11.621 10:12:42 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:11.621 Running I/O for 1 seconds... 00:39:13.004 19167.00 IOPS, 74.87 MiB/s 00:39:13.004 Latency(us) 00:39:13.004 [2024-11-20T09:12:43.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:13.004 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:13.004 nvme0n1 : 1.00 19220.29 75.08 0.00 0.00 6648.11 2484.91 18459.31 00:39:13.004 [2024-11-20T09:12:43.920Z] =================================================================================================================== 00:39:13.004 [2024-11-20T09:12:43.920Z] Total : 19220.29 75.08 0.00 0.00 6648.11 2484.91 18459.31 00:39:13.004 { 00:39:13.004 "results": [ 00:39:13.004 { 00:39:13.004 "job": "nvme0n1", 00:39:13.004 "core_mask": "0x2", 00:39:13.004 "workload": "randrw", 00:39:13.004 "percentage": 50, 00:39:13.004 "status": "finished", 00:39:13.004 "queue_depth": 128, 00:39:13.004 "io_size": 4096, 00:39:13.004 "runtime": 1.003939, 00:39:13.004 "iops": 19220.291272676925, 00:39:13.004 "mibps": 75.07926278389424, 00:39:13.004 "io_failed": 0, 00:39:13.004 "io_timeout": 0, 00:39:13.004 "avg_latency_us": 6648.1144499723605, 00:39:13.004 "min_latency_us": 2484.9066666666668, 00:39:13.004 "max_latency_us": 18459.306666666667 00:39:13.004 } 00:39:13.004 ], 00:39:13.004 "core_count": 1 00:39:13.004 } 00:39:13.004 10:12:43 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:13.004 10:12:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:13.004 10:12:43 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:39:13.004 10:12:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:13.004 10:12:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:13.004 10:12:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:13.004 10:12:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:13.004 10:12:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:13.004 10:12:43 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:13.004 10:12:43 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:39:13.004 10:12:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:13.004 10:12:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:13.004 10:12:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:13.004 10:12:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:13.004 10:12:43 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:13.265 10:12:44 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:39:13.265 10:12:44 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:13.265 10:12:44 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:13.265 10:12:44 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:13.265 10:12:44 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:13.265 10:12:44 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:13.265 10:12:44 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:13.265 10:12:44 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:13.265 10:12:44 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:13.265 10:12:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:13.526 [2024-11-20 10:12:44.243213] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:13.526 [2024-11-20 10:12:44.243918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x689c10 (107): Transport endpoint is not connected 00:39:13.526 [2024-11-20 10:12:44.244914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x689c10 (9): Bad file descriptor 00:39:13.526 [2024-11-20 10:12:44.245915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:13.527 [2024-11-20 10:12:44.245923] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:13.527 [2024-11-20 10:12:44.245929] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:13.527 [2024-11-20 10:12:44.245936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
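This step is the wrong-key negative test: nvme0n1 was attached with key0 and detached, and a reattach with key1 has to fail (the JSON-RPC request and error dump for that failure follows below). With the same flags the trace uses, the shape of the check is:

rpc=scripts/rpc.py
sock=/var/tmp/bperf.sock
# NOT inverts the exit status, so the step passes only when the attach fails.
if "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk key1; then
    echo "attach with the wrong PSK unexpectedly succeeded" >&2
    exit 1
fi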
00:39:13.527 request: 00:39:13.527 { 00:39:13.527 "name": "nvme0", 00:39:13.527 "trtype": "tcp", 00:39:13.527 "traddr": "127.0.0.1", 00:39:13.527 "adrfam": "ipv4", 00:39:13.527 "trsvcid": "4420", 00:39:13.527 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:13.527 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:13.527 "prchk_reftag": false, 00:39:13.527 "prchk_guard": false, 00:39:13.527 "hdgst": false, 00:39:13.527 "ddgst": false, 00:39:13.527 "psk": "key1", 00:39:13.527 "allow_unrecognized_csi": false, 00:39:13.527 "method": "bdev_nvme_attach_controller", 00:39:13.527 "req_id": 1 00:39:13.527 } 00:39:13.527 Got JSON-RPC error response 00:39:13.527 response: 00:39:13.527 { 00:39:13.527 "code": -5, 00:39:13.527 "message": "Input/output error" 00:39:13.527 } 00:39:13.527 10:12:44 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:13.527 10:12:44 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:13.527 10:12:44 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:13.527 10:12:44 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:13.527 10:12:44 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:39:13.527 10:12:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:13.527 10:12:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:13.527 10:12:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:13.527 10:12:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:13.527 10:12:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:13.527 10:12:44 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:13.527 10:12:44 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:39:13.527 10:12:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:13.787 10:12:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:13.787 10:12:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:13.787 10:12:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:13.787 10:12:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:13.787 10:12:44 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:39:13.787 10:12:44 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:39:13.787 10:12:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:14.048 10:12:44 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:39:14.048 10:12:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:14.048 10:12:44 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:39:14.048 10:12:44 keyring_file -- keyring/file.sh@78 -- # jq length 00:39:14.048 10:12:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:14.309 10:12:45 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:39:14.309 10:12:45 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.c4jrQOKOAi 00:39:14.309 10:12:45 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.c4jrQOKOAi 00:39:14.309 10:12:45 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:14.309 10:12:45 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.c4jrQOKOAi 00:39:14.309 10:12:45 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:14.309 10:12:45 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:14.309 10:12:45 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:14.309 10:12:45 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:14.309 10:12:45 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.c4jrQOKOAi 00:39:14.309 10:12:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.c4jrQOKOAi 00:39:14.570 [2024-11-20 10:12:45.274622] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.c4jrQOKOAi': 0100660 00:39:14.570 [2024-11-20 10:12:45.274642] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:14.570 request: 00:39:14.570 { 00:39:14.570 "name": "key0", 00:39:14.570 "path": "/tmp/tmp.c4jrQOKOAi", 00:39:14.570 "method": "keyring_file_add_key", 00:39:14.570 "req_id": 1 00:39:14.570 } 00:39:14.570 Got JSON-RPC error response 00:39:14.570 response: 00:39:14.570 { 00:39:14.570 "code": -1, 00:39:14.570 "message": "Operation not permitted" 00:39:14.570 } 00:39:14.570 10:12:45 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:14.570 10:12:45 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:14.570 10:12:45 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:14.570 10:12:45 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:14.570 10:12:45 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.c4jrQOKOAi 00:39:14.570 10:12:45 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.c4jrQOKOAi 00:39:14.570 10:12:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.c4jrQOKOAi 00:39:14.570 10:12:45 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.c4jrQOKOAi 00:39:14.570 10:12:45 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:39:14.570 10:12:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:14.570 10:12:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:14.570 10:12:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:14.570 10:12:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:14.570 10:12:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:14.831 10:12:45 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:39:14.831 10:12:45 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:14.831 10:12:45 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:14.831 10:12:45 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:14.831 10:12:45 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:14.831 10:12:45 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:14.831 10:12:45 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:14.831 10:12:45 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:14.831 10:12:45 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:14.831 10:12:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:15.092 [2024-11-20 10:12:45.820008] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.c4jrQOKOAi': No such file or directory 00:39:15.092 [2024-11-20 10:12:45.820022] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:15.092 [2024-11-20 10:12:45.820035] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:15.092 [2024-11-20 10:12:45.820040] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:39:15.092 [2024-11-20 10:12:45.820046] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:15.092 [2024-11-20 10:12:45.820051] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:15.092 request: 00:39:15.092 { 00:39:15.092 "name": "nvme0", 00:39:15.092 "trtype": "tcp", 00:39:15.092 "traddr": "127.0.0.1", 00:39:15.092 "adrfam": "ipv4", 00:39:15.092 "trsvcid": "4420", 00:39:15.092 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:15.092 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:15.092 "prchk_reftag": false, 00:39:15.092 "prchk_guard": false, 00:39:15.092 "hdgst": false, 00:39:15.092 "ddgst": false, 00:39:15.092 "psk": "key0", 00:39:15.092 "allow_unrecognized_csi": false, 00:39:15.092 "method": "bdev_nvme_attach_controller", 00:39:15.092 "req_id": 1 00:39:15.092 } 00:39:15.092 Got JSON-RPC error response 00:39:15.092 response: 00:39:15.092 { 00:39:15.092 "code": -19, 00:39:15.092 "message": "No such device" 00:39:15.092 } 00:39:15.092 10:12:45 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:15.092 10:12:45 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:15.092 10:12:45 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:15.092 10:12:45 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:15.092 10:12:45 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:39:15.092 10:12:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:15.353 10:12:46 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:15.353 10:12:46 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:39:15.353 10:12:46 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:15.353 10:12:46 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:15.353 10:12:46 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:15.353 10:12:46 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:15.353 10:12:46 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Oy64Mb3Vkp 00:39:15.353 10:12:46 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:15.353 10:12:46 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:15.353 10:12:46 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:15.353 10:12:46 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:15.353 10:12:46 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:15.353 10:12:46 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:15.353 10:12:46 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:15.353 10:12:46 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Oy64Mb3Vkp 00:39:15.353 10:12:46 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Oy64Mb3Vkp 00:39:15.353 10:12:46 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Oy64Mb3Vkp 00:39:15.353 10:12:46 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Oy64Mb3Vkp 00:39:15.353 10:12:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Oy64Mb3Vkp 00:39:15.353 10:12:46 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:15.353 10:12:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:15.613 nvme0n1 00:39:15.613 10:12:46 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:39:15.613 10:12:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:15.613 10:12:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:15.613 10:12:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:15.613 10:12:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:15.613 10:12:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:15.874 10:12:46 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:39:15.874 10:12:46 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:39:15.874 10:12:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:16.134 10:12:46 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:39:16.134 10:12:46 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:39:16.134 10:12:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:16.134 10:12:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:16.134 10:12:46 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:16.134 10:12:47 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:39:16.134 10:12:47 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:39:16.134 10:12:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:16.134 10:12:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:16.134 10:12:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:16.134 10:12:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:16.134 10:12:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:16.394 10:12:47 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:39:16.394 10:12:47 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:16.394 10:12:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:16.654 10:12:47 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:39:16.654 10:12:47 keyring_file -- keyring/file.sh@105 -- # jq length 00:39:16.654 10:12:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:16.654 10:12:47 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:39:16.654 10:12:47 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Oy64Mb3Vkp 00:39:16.654 10:12:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Oy64Mb3Vkp 00:39:16.914 10:12:47 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rUI3UnCGhJ 00:39:16.914 10:12:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rUI3UnCGhJ 00:39:17.174 10:12:47 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:17.174 10:12:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:17.434 nvme0n1 00:39:17.434 10:12:48 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:39:17.434 10:12:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:17.695 10:12:48 keyring_file -- keyring/file.sh@113 -- # config='{ 00:39:17.695 "subsystems": [ 00:39:17.695 { 00:39:17.695 "subsystem": "keyring", 00:39:17.695 "config": [ 00:39:17.695 { 00:39:17.695 "method": "keyring_file_add_key", 00:39:17.695 "params": { 00:39:17.695 "name": "key0", 00:39:17.695 "path": "/tmp/tmp.Oy64Mb3Vkp" 00:39:17.695 } 00:39:17.695 }, 00:39:17.695 { 00:39:17.695 "method": "keyring_file_add_key", 00:39:17.695 "params": { 00:39:17.695 "name": "key1", 00:39:17.695 "path": "/tmp/tmp.rUI3UnCGhJ" 00:39:17.695 } 00:39:17.695 } 00:39:17.695 ] 00:39:17.695 
}, 00:39:17.695 { 00:39:17.695 "subsystem": "iobuf", 00:39:17.695 "config": [ 00:39:17.695 { 00:39:17.695 "method": "iobuf_set_options", 00:39:17.695 "params": { 00:39:17.695 "small_pool_count": 8192, 00:39:17.695 "large_pool_count": 1024, 00:39:17.695 "small_bufsize": 8192, 00:39:17.695 "large_bufsize": 135168, 00:39:17.695 "enable_numa": false 00:39:17.695 } 00:39:17.696 } 00:39:17.696 ] 00:39:17.696 }, 00:39:17.696 { 00:39:17.696 "subsystem": "sock", 00:39:17.696 "config": [ 00:39:17.696 { 00:39:17.696 "method": "sock_set_default_impl", 00:39:17.696 "params": { 00:39:17.696 "impl_name": "posix" 00:39:17.696 } 00:39:17.696 }, 00:39:17.696 { 00:39:17.696 "method": "sock_impl_set_options", 00:39:17.696 "params": { 00:39:17.696 "impl_name": "ssl", 00:39:17.696 "recv_buf_size": 4096, 00:39:17.696 "send_buf_size": 4096, 00:39:17.696 "enable_recv_pipe": true, 00:39:17.696 "enable_quickack": false, 00:39:17.696 "enable_placement_id": 0, 00:39:17.696 "enable_zerocopy_send_server": true, 00:39:17.696 "enable_zerocopy_send_client": false, 00:39:17.696 "zerocopy_threshold": 0, 00:39:17.696 "tls_version": 0, 00:39:17.696 "enable_ktls": false 00:39:17.696 } 00:39:17.696 }, 00:39:17.696 { 00:39:17.696 "method": "sock_impl_set_options", 00:39:17.696 "params": { 00:39:17.696 "impl_name": "posix", 00:39:17.696 "recv_buf_size": 2097152, 00:39:17.696 "send_buf_size": 2097152, 00:39:17.696 "enable_recv_pipe": true, 00:39:17.696 "enable_quickack": false, 00:39:17.696 "enable_placement_id": 0, 00:39:17.696 "enable_zerocopy_send_server": true, 00:39:17.696 "enable_zerocopy_send_client": false, 00:39:17.696 "zerocopy_threshold": 0, 00:39:17.696 "tls_version": 0, 00:39:17.696 "enable_ktls": false 00:39:17.696 } 00:39:17.696 } 00:39:17.696 ] 00:39:17.696 }, 00:39:17.696 { 00:39:17.696 "subsystem": "vmd", 00:39:17.696 "config": [] 00:39:17.696 }, 00:39:17.696 { 00:39:17.696 "subsystem": "accel", 00:39:17.696 "config": [ 00:39:17.696 { 00:39:17.696 "method": "accel_set_options", 00:39:17.696 "params": { 00:39:17.696 "small_cache_size": 128, 00:39:17.696 "large_cache_size": 16, 00:39:17.696 "task_count": 2048, 00:39:17.696 "sequence_count": 2048, 00:39:17.696 "buf_count": 2048 00:39:17.696 } 00:39:17.696 } 00:39:17.696 ] 00:39:17.696 }, 00:39:17.696 { 00:39:17.696 "subsystem": "bdev", 00:39:17.696 "config": [ 00:39:17.696 { 00:39:17.696 "method": "bdev_set_options", 00:39:17.696 "params": { 00:39:17.696 "bdev_io_pool_size": 65535, 00:39:17.696 "bdev_io_cache_size": 256, 00:39:17.696 "bdev_auto_examine": true, 00:39:17.696 "iobuf_small_cache_size": 128, 00:39:17.696 "iobuf_large_cache_size": 16 00:39:17.696 } 00:39:17.696 }, 00:39:17.696 { 00:39:17.696 "method": "bdev_raid_set_options", 00:39:17.696 "params": { 00:39:17.696 "process_window_size_kb": 1024, 00:39:17.696 "process_max_bandwidth_mb_sec": 0 00:39:17.696 } 00:39:17.696 }, 00:39:17.696 { 00:39:17.696 "method": "bdev_iscsi_set_options", 00:39:17.696 "params": { 00:39:17.696 "timeout_sec": 30 00:39:17.696 } 00:39:17.696 }, 00:39:17.696 { 00:39:17.696 "method": "bdev_nvme_set_options", 00:39:17.696 "params": { 00:39:17.696 "action_on_timeout": "none", 00:39:17.696 "timeout_us": 0, 00:39:17.696 "timeout_admin_us": 0, 00:39:17.696 "keep_alive_timeout_ms": 10000, 00:39:17.696 "arbitration_burst": 0, 00:39:17.696 "low_priority_weight": 0, 00:39:17.696 "medium_priority_weight": 0, 00:39:17.696 "high_priority_weight": 0, 00:39:17.696 "nvme_adminq_poll_period_us": 10000, 00:39:17.696 "nvme_ioq_poll_period_us": 0, 00:39:17.696 "io_queue_requests": 512, 00:39:17.696 
"delay_cmd_submit": true, 00:39:17.696 "transport_retry_count": 4, 00:39:17.696 "bdev_retry_count": 3, 00:39:17.696 "transport_ack_timeout": 0, 00:39:17.696 "ctrlr_loss_timeout_sec": 0, 00:39:17.696 "reconnect_delay_sec": 0, 00:39:17.696 "fast_io_fail_timeout_sec": 0, 00:39:17.696 "disable_auto_failback": false, 00:39:17.696 "generate_uuids": false, 00:39:17.696 "transport_tos": 0, 00:39:17.696 "nvme_error_stat": false, 00:39:17.696 "rdma_srq_size": 0, 00:39:17.696 "io_path_stat": false, 00:39:17.696 "allow_accel_sequence": false, 00:39:17.696 "rdma_max_cq_size": 0, 00:39:17.696 "rdma_cm_event_timeout_ms": 0, 00:39:17.696 "dhchap_digests": [ 00:39:17.696 "sha256", 00:39:17.696 "sha384", 00:39:17.696 "sha512" 00:39:17.696 ], 00:39:17.696 "dhchap_dhgroups": [ 00:39:17.696 "null", 00:39:17.696 "ffdhe2048", 00:39:17.696 "ffdhe3072", 00:39:17.696 "ffdhe4096", 00:39:17.696 "ffdhe6144", 00:39:17.696 "ffdhe8192" 00:39:17.696 ] 00:39:17.696 } 00:39:17.696 }, 00:39:17.696 { 00:39:17.696 "method": "bdev_nvme_attach_controller", 00:39:17.696 "params": { 00:39:17.696 "name": "nvme0", 00:39:17.696 "trtype": "TCP", 00:39:17.696 "adrfam": "IPv4", 00:39:17.696 "traddr": "127.0.0.1", 00:39:17.696 "trsvcid": "4420", 00:39:17.696 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:17.696 "prchk_reftag": false, 00:39:17.696 "prchk_guard": false, 00:39:17.696 "ctrlr_loss_timeout_sec": 0, 00:39:17.696 "reconnect_delay_sec": 0, 00:39:17.696 "fast_io_fail_timeout_sec": 0, 00:39:17.696 "psk": "key0", 00:39:17.696 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:17.696 "hdgst": false, 00:39:17.696 "ddgst": false, 00:39:17.696 "multipath": "multipath" 00:39:17.696 } 00:39:17.696 }, 00:39:17.696 { 00:39:17.696 "method": "bdev_nvme_set_hotplug", 00:39:17.696 "params": { 00:39:17.696 "period_us": 100000, 00:39:17.696 "enable": false 00:39:17.696 } 00:39:17.696 }, 00:39:17.696 { 00:39:17.696 "method": "bdev_wait_for_examine" 00:39:17.696 } 00:39:17.696 ] 00:39:17.696 }, 00:39:17.696 { 00:39:17.696 "subsystem": "nbd", 00:39:17.696 "config": [] 00:39:17.696 } 00:39:17.696 ] 00:39:17.696 }' 00:39:17.696 10:12:48 keyring_file -- keyring/file.sh@115 -- # killprocess 1707452 00:39:17.696 10:12:48 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1707452 ']' 00:39:17.696 10:12:48 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1707452 00:39:17.696 10:12:48 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:17.696 10:12:48 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:17.696 10:12:48 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1707452 00:39:17.696 10:12:48 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:17.696 10:12:48 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:17.696 10:12:48 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1707452' 00:39:17.696 killing process with pid 1707452 00:39:17.696 10:12:48 keyring_file -- common/autotest_common.sh@973 -- # kill 1707452 00:39:17.696 Received shutdown signal, test time was about 1.000000 seconds 00:39:17.696 00:39:17.696 Latency(us) 00:39:17.696 [2024-11-20T09:12:48.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:17.696 [2024-11-20T09:12:48.612Z] =================================================================================================================== 00:39:17.696 [2024-11-20T09:12:48.612Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:17.696 10:12:48 
keyring_file -- common/autotest_common.sh@978 -- # wait 1707452 00:39:17.696 10:12:48 keyring_file -- keyring/file.sh@118 -- # bperfpid=1709267 00:39:17.696 10:12:48 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1709267 /var/tmp/bperf.sock 00:39:17.696 10:12:48 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1709267 ']' 00:39:17.696 10:12:48 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:17.696 10:12:48 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:17.696 10:12:48 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:17.696 10:12:48 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:17.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:17.696 10:12:48 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:17.696 10:12:48 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:39:17.696 "subsystems": [ 00:39:17.696 { 00:39:17.696 "subsystem": "keyring", 00:39:17.696 "config": [ 00:39:17.696 { 00:39:17.696 "method": "keyring_file_add_key", 00:39:17.696 "params": { 00:39:17.697 "name": "key0", 00:39:17.697 "path": "/tmp/tmp.Oy64Mb3Vkp" 00:39:17.697 } 00:39:17.697 }, 00:39:17.697 { 00:39:17.697 "method": "keyring_file_add_key", 00:39:17.697 "params": { 00:39:17.697 "name": "key1", 00:39:17.697 "path": "/tmp/tmp.rUI3UnCGhJ" 00:39:17.697 } 00:39:17.697 } 00:39:17.697 ] 00:39:17.697 }, 00:39:17.697 { 00:39:17.697 "subsystem": "iobuf", 00:39:17.697 "config": [ 00:39:17.697 { 00:39:17.697 "method": "iobuf_set_options", 00:39:17.697 "params": { 00:39:17.697 "small_pool_count": 8192, 00:39:17.697 "large_pool_count": 1024, 00:39:17.697 "small_bufsize": 8192, 00:39:17.697 "large_bufsize": 135168, 00:39:17.697 "enable_numa": false 00:39:17.697 } 00:39:17.697 } 00:39:17.697 ] 00:39:17.697 }, 00:39:17.697 { 00:39:17.697 "subsystem": "sock", 00:39:17.697 "config": [ 00:39:17.697 { 00:39:17.697 "method": "sock_set_default_impl", 00:39:17.697 "params": { 00:39:17.697 "impl_name": "posix" 00:39:17.697 } 00:39:17.697 }, 00:39:17.697 { 00:39:17.697 "method": "sock_impl_set_options", 00:39:17.697 "params": { 00:39:17.697 "impl_name": "ssl", 00:39:17.697 "recv_buf_size": 4096, 00:39:17.697 "send_buf_size": 4096, 00:39:17.697 "enable_recv_pipe": true, 00:39:17.697 "enable_quickack": false, 00:39:17.697 "enable_placement_id": 0, 00:39:17.697 "enable_zerocopy_send_server": true, 00:39:17.697 "enable_zerocopy_send_client": false, 00:39:17.697 "zerocopy_threshold": 0, 00:39:17.697 "tls_version": 0, 00:39:17.697 "enable_ktls": false 00:39:17.697 } 00:39:17.697 }, 00:39:17.697 { 00:39:17.697 "method": "sock_impl_set_options", 00:39:17.697 "params": { 00:39:17.697 "impl_name": "posix", 00:39:17.697 "recv_buf_size": 2097152, 00:39:17.697 "send_buf_size": 2097152, 00:39:17.697 "enable_recv_pipe": true, 00:39:17.697 "enable_quickack": false, 00:39:17.697 "enable_placement_id": 0, 00:39:17.697 "enable_zerocopy_send_server": true, 00:39:17.697 "enable_zerocopy_send_client": false, 00:39:17.697 "zerocopy_threshold": 0, 00:39:17.697 "tls_version": 0, 00:39:17.697 "enable_ktls": false 00:39:17.697 } 00:39:17.697 } 00:39:17.697 ] 00:39:17.697 }, 00:39:17.697 { 00:39:17.697 "subsystem": "vmd", 00:39:17.697 "config": [] 00:39:17.697 }, 
00:39:17.697 { 00:39:17.697 "subsystem": "accel", 00:39:17.697 "config": [ 00:39:17.697 { 00:39:17.697 "method": "accel_set_options", 00:39:17.697 "params": { 00:39:17.697 "small_cache_size": 128, 00:39:17.697 "large_cache_size": 16, 00:39:17.697 "task_count": 2048, 00:39:17.697 "sequence_count": 2048, 00:39:17.697 "buf_count": 2048 00:39:17.697 } 00:39:17.697 } 00:39:17.697 ] 00:39:17.697 }, 00:39:17.697 { 00:39:17.697 "subsystem": "bdev", 00:39:17.697 "config": [ 00:39:17.697 { 00:39:17.697 "method": "bdev_set_options", 00:39:17.697 "params": { 00:39:17.697 "bdev_io_pool_size": 65535, 00:39:17.697 "bdev_io_cache_size": 256, 00:39:17.697 "bdev_auto_examine": true, 00:39:17.697 "iobuf_small_cache_size": 128, 00:39:17.697 "iobuf_large_cache_size": 16 00:39:17.697 } 00:39:17.697 }, 00:39:17.697 { 00:39:17.697 "method": "bdev_raid_set_options", 00:39:17.697 "params": { 00:39:17.697 "process_window_size_kb": 1024, 00:39:17.697 "process_max_bandwidth_mb_sec": 0 00:39:17.697 } 00:39:17.697 }, 00:39:17.697 { 00:39:17.697 "method": "bdev_iscsi_set_options", 00:39:17.697 "params": { 00:39:17.697 "timeout_sec": 30 00:39:17.697 } 00:39:17.697 }, 00:39:17.697 { 00:39:17.697 "method": "bdev_nvme_set_options", 00:39:17.697 "params": { 00:39:17.697 "action_on_timeout": "none", 00:39:17.697 "timeout_us": 0, 00:39:17.697 "timeout_admin_us": 0, 00:39:17.697 "keep_alive_timeout_ms": 10000, 00:39:17.697 "arbitration_burst": 0, 00:39:17.697 "low_priority_weight": 0, 00:39:17.697 "medium_priority_weight": 0, 00:39:17.697 "high_priority_weight": 0, 00:39:17.697 "nvme_adminq_poll_period_us": 10000, 00:39:17.697 "nvme_ioq_poll_period_us": 0, 00:39:17.697 "io_queue_requests": 512, 00:39:17.697 "delay_cmd_submit": true, 00:39:17.697 "transport_retry_count": 4, 00:39:17.697 "bdev_retry_count": 3, 00:39:17.697 "transport_ack_timeout": 0, 00:39:17.697 "ctrlr_loss_timeout_sec": 0, 00:39:17.697 "reconnect_delay_sec": 0, 00:39:17.697 "fast_io_fail_timeout_sec": 0, 00:39:17.697 "disable_auto_failback": false, 00:39:17.697 "generate_uuids": false, 00:39:17.697 "transport_tos": 0, 00:39:17.697 "nvme_error_stat": false, 00:39:17.697 "rdma_srq_size": 0, 00:39:17.697 10:12:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:17.697 "io_path_stat": false, 00:39:17.697 "allow_accel_sequence": false, 00:39:17.697 "rdma_max_cq_size": 0, 00:39:17.697 "rdma_cm_event_timeout_ms": 0, 00:39:17.697 "dhchap_digests": [ 00:39:17.697 "sha256", 00:39:17.697 "sha384", 00:39:17.697 "sha512" 00:39:17.697 ], 00:39:17.697 "dhchap_dhgroups": [ 00:39:17.697 "null", 00:39:17.697 "ffdhe2048", 00:39:17.697 "ffdhe3072", 00:39:17.697 "ffdhe4096", 00:39:17.697 "ffdhe6144", 00:39:17.697 "ffdhe8192" 00:39:17.697 ] 00:39:17.697 } 00:39:17.697 }, 00:39:17.697 { 00:39:17.697 "method": "bdev_nvme_attach_controller", 00:39:17.697 "params": { 00:39:17.697 "name": "nvme0", 00:39:17.697 "trtype": "TCP", 00:39:17.697 "adrfam": "IPv4", 00:39:17.697 "traddr": "127.0.0.1", 00:39:17.697 "trsvcid": "4420", 00:39:17.697 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:17.697 "prchk_reftag": false, 00:39:17.697 "prchk_guard": false, 00:39:17.697 "ctrlr_loss_timeout_sec": 0, 00:39:17.697 "reconnect_delay_sec": 0, 00:39:17.697 "fast_io_fail_timeout_sec": 0, 00:39:17.697 "psk": "key0", 00:39:17.697 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:17.697 "hdgst": false, 00:39:17.697 "ddgst": false, 00:39:17.697 "multipath": "multipath" 00:39:17.697 } 00:39:17.697 }, 00:39:17.697 { 00:39:17.697 "method": "bdev_nvme_set_hotplug", 00:39:17.697 "params": { 00:39:17.697 
"period_us": 100000, 00:39:17.697 "enable": false 00:39:17.697 } 00:39:17.697 }, 00:39:17.697 { 00:39:17.697 "method": "bdev_wait_for_examine" 00:39:17.697 } 00:39:17.697 ] 00:39:17.697 }, 00:39:17.697 { 00:39:17.697 "subsystem": "nbd", 00:39:17.697 "config": [] 00:39:17.697 } 00:39:17.697 ] 00:39:17.697 }' 00:39:17.697 [2024-11-20 10:12:48.589277] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:39:17.697 [2024-11-20 10:12:48.589333] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709267 ] 00:39:17.958 [2024-11-20 10:12:48.673866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:17.958 [2024-11-20 10:12:48.702434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:17.958 [2024-11-20 10:12:48.845276] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:18.527 10:12:49 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:18.527 10:12:49 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:18.527 10:12:49 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:39:18.527 10:12:49 keyring_file -- keyring/file.sh@121 -- # jq length 00:39:18.527 10:12:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:18.787 10:12:49 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:18.787 10:12:49 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:39:18.787 10:12:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:18.787 10:12:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:18.787 10:12:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:18.787 10:12:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:18.787 10:12:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:19.047 10:12:49 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:39:19.047 10:12:49 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:39:19.047 10:12:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:19.047 10:12:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:19.047 10:12:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:19.047 10:12:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:19.047 10:12:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:19.047 10:12:49 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:39:19.047 10:12:49 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:39:19.047 10:12:49 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:39:19.047 10:12:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:19.308 10:12:50 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:39:19.308 10:12:50 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:19.308 10:12:50 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.Oy64Mb3Vkp /tmp/tmp.rUI3UnCGhJ 00:39:19.308 10:12:50 keyring_file -- keyring/file.sh@20 -- # killprocess 1709267 00:39:19.308 10:12:50 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1709267 ']' 00:39:19.308 10:12:50 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1709267 00:39:19.308 10:12:50 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:19.308 10:12:50 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:19.308 10:12:50 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1709267 00:39:19.308 10:12:50 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:19.308 10:12:50 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:19.308 10:12:50 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1709267' 00:39:19.308 killing process with pid 1709267 00:39:19.308 10:12:50 keyring_file -- common/autotest_common.sh@973 -- # kill 1709267 00:39:19.308 Received shutdown signal, test time was about 1.000000 seconds 00:39:19.308 00:39:19.308 Latency(us) 00:39:19.308 [2024-11-20T09:12:50.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:19.308 [2024-11-20T09:12:50.224Z] =================================================================================================================== 00:39:19.308 [2024-11-20T09:12:50.224Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:19.308 10:12:50 keyring_file -- common/autotest_common.sh@978 -- # wait 1709267 00:39:19.570 10:12:50 keyring_file -- keyring/file.sh@21 -- # killprocess 1707362 00:39:19.570 10:12:50 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1707362 ']' 00:39:19.570 10:12:50 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1707362 00:39:19.570 10:12:50 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:19.570 10:12:50 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:19.570 10:12:50 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1707362 00:39:19.570 10:12:50 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:19.570 10:12:50 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:19.570 10:12:50 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1707362' 00:39:19.570 killing process with pid 1707362 00:39:19.570 10:12:50 keyring_file -- common/autotest_common.sh@973 -- # kill 1707362 00:39:19.570 10:12:50 keyring_file -- common/autotest_common.sh@978 -- # wait 1707362 00:39:19.830 00:39:19.831 real 0m12.009s 00:39:19.831 user 0m29.045s 00:39:19.831 sys 0m2.689s 00:39:19.831 10:12:50 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:19.831 10:12:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:19.831 ************************************ 00:39:19.831 END TEST keyring_file 00:39:19.831 ************************************ 00:39:19.831 10:12:50 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:39:19.831 10:12:50 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:19.831 10:12:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:19.831 10:12:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:19.831 10:12:50 
-- common/autotest_common.sh@10 -- # set +x 00:39:19.831 ************************************ 00:39:19.831 START TEST keyring_linux 00:39:19.831 ************************************ 00:39:19.831 10:12:50 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:19.831 Joined session keyring: 76448385 00:39:19.831 * Looking for test storage... 00:39:19.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:19.831 10:12:50 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:19.831 10:12:50 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:39:19.831 10:12:50 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:20.092 10:12:50 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@345 -- # : 1 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@368 -- # return 0 00:39:20.092 10:12:50 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:20.092 10:12:50 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:20.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:20.092 --rc genhtml_branch_coverage=1 00:39:20.092 --rc genhtml_function_coverage=1 00:39:20.092 --rc genhtml_legend=1 00:39:20.092 --rc geninfo_all_blocks=1 00:39:20.092 --rc geninfo_unexecuted_blocks=1 00:39:20.092 00:39:20.092 ' 00:39:20.092 10:12:50 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:20.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:20.092 --rc genhtml_branch_coverage=1 00:39:20.092 --rc genhtml_function_coverage=1 00:39:20.092 --rc genhtml_legend=1 00:39:20.092 --rc geninfo_all_blocks=1 00:39:20.092 --rc geninfo_unexecuted_blocks=1 00:39:20.092 00:39:20.092 ' 00:39:20.092 10:12:50 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:20.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:20.092 --rc genhtml_branch_coverage=1 00:39:20.092 --rc genhtml_function_coverage=1 00:39:20.092 --rc genhtml_legend=1 00:39:20.092 --rc geninfo_all_blocks=1 00:39:20.092 --rc geninfo_unexecuted_blocks=1 00:39:20.092 00:39:20.092 ' 00:39:20.092 10:12:50 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:20.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:20.092 --rc genhtml_branch_coverage=1 00:39:20.092 --rc genhtml_function_coverage=1 00:39:20.092 --rc genhtml_legend=1 00:39:20.092 --rc geninfo_all_blocks=1 00:39:20.092 --rc geninfo_unexecuted_blocks=1 00:39:20.092 00:39:20.092 ' 00:39:20.092 10:12:50 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:20.092 10:12:50 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:20.092 10:12:50 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:20.092 10:12:50 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:20.092 10:12:50 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:20.092 10:12:50 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:20.092 10:12:50 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:20.092 10:12:50 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:39:20.092 10:12:50 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:20.092 10:12:50 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:20.092 10:12:50 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:20.092 10:12:50 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:20.092 10:12:50 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:20.092 10:12:50 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:20.092 10:12:50 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:20.092 10:12:50 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:20.092 10:12:50 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:20.092 10:12:50 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:20.092 10:12:50 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:20.092 10:12:50 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:20.092 10:12:50 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:20.092 10:12:50 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.092 10:12:50 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.092 10:12:50 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.092 10:12:50 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:20.093 10:12:50 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:20.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:20.093 10:12:50 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:20.093 10:12:50 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:20.093 10:12:50 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:20.093 10:12:50 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:20.093 10:12:50 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:20.093 10:12:50 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:20.093 10:12:50 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:20.093 10:12:50 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:20.093 10:12:50 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:20.093 10:12:50 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:20.093 10:12:50 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:20.093 10:12:50 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:20.093 10:12:50 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:20.093 10:12:50 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:20.093 10:12:50 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:20.093 /tmp/:spdk-test:key0 00:39:20.093 10:12:50 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:20.093 10:12:50 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:20.093 10:12:50 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:20.093 10:12:50 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:20.093 10:12:50 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:20.093 10:12:50 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:20.093 
10:12:50 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:20.093 10:12:50 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:20.093 10:12:50 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:20.093 10:12:50 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:20.093 /tmp/:spdk-test:key1 00:39:20.093 10:12:50 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1709703 00:39:20.093 10:12:50 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1709703 00:39:20.093 10:12:50 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:20.093 10:12:50 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1709703 ']' 00:39:20.093 10:12:50 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:20.093 10:12:50 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:20.093 10:12:50 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:20.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:20.093 10:12:50 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:20.093 10:12:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:20.093 [2024-11-20 10:12:51.001933] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
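The two files echoed above, /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1, hold the keys in NVMe TLS PSK interchange format. Judging from the printed value, the base64 body is the configured key as ASCII followed by a 4-byte CRC-32. A minimal sketch of what keyring/common.sh@20 computes through its python - heredoc (the little-endian CRC packing is inferred from the output shown, so treat it as an assumption rather than the verbatim harness code):

key=00112233445566778899aabbccddeeff
python3 - "$key" <<'PY'
import base64, sys, zlib

key = sys.argv[1].encode("ascii")             # key material is used as an ASCII string
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed: CRC-32 appended little-endian
print(f"NVMeTLSkey-1:00:{base64.b64encode(key + crc).decode()}:")
PY
# expected: NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
# i.e. the same PSK that keyctl add loads into the session keyring below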
00:39:20.093 [2024-11-20 10:12:51.001994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709703 ] 00:39:20.354 [2024-11-20 10:12:51.079549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:20.354 [2024-11-20 10:12:51.111181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:20.926 10:12:51 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:20.926 10:12:51 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:20.926 10:12:51 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:20.926 10:12:51 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:20.926 10:12:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:20.926 [2024-11-20 10:12:51.812025] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:20.926 null0 00:39:21.186 [2024-11-20 10:12:51.844080] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:21.186 [2024-11-20 10:12:51.844441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:21.186 10:12:51 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:21.186 10:12:51 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:21.186 489065154 00:39:21.186 10:12:51 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:21.186 967875523 00:39:21.186 10:12:51 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1710041 00:39:21.186 10:12:51 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1710041 /var/tmp/bperf.sock 00:39:21.186 10:12:51 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:21.186 10:12:51 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1710041 ']' 00:39:21.186 10:12:51 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:21.186 10:12:51 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:21.186 10:12:51 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:21.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:21.186 10:12:51 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:21.186 10:12:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:21.186 [2024-11-20 10:12:51.920322] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
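The two keyctl add calls above install the interchange PSKs as user-type keys on the session keyring (@s) and print their kernel serial numbers, 489065154 and 967875523, which the test later resolves back by name and finally unlinks. A minimal sketch of that round trip, with values copied from the trace:

# add a key; stdout is the kernel-assigned serial number
sn=$(keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s)

# resolve name -> serial and read the payload back, as linux.sh@16 and @27 do
keyctl search @s user :spdk-test:key0   # prints 489065154
keyctl print "$sn"                      # prints the NVMeTLSkey-1:00:... payload

# teardown (linux.sh@34) unlinks by serial and reports "1 links removed"
keyctl unlink "$sn"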
00:39:21.186 [2024-11-20 10:12:51.920373] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1710041 ] 00:39:21.186 [2024-11-20 10:12:52.004092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:21.186 [2024-11-20 10:12:52.033626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:22.127 10:12:52 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:22.127 10:12:52 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:22.127 10:12:52 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:22.127 10:12:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:22.127 10:12:52 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:22.127 10:12:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:22.387 10:12:53 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:22.387 10:12:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:22.387 [2024-11-20 10:12:53.285310] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:22.647 nvme0n1 00:39:22.647 10:12:53 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:39:22.647 10:12:53 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:22.647 10:12:53 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:22.647 10:12:53 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:22.647 10:12:53 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:22.647 10:12:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:22.647 10:12:53 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:22.647 10:12:53 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:22.647 10:12:53 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:22.647 10:12:53 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:22.647 10:12:53 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:22.647 10:12:53 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:22.647 10:12:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:22.907 10:12:53 keyring_linux -- keyring/linux.sh@25 -- # sn=489065154 00:39:22.907 10:12:53 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:22.907 10:12:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:22.907 10:12:53 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 489065154 == \4\8\9\0\6\5\1\5\4 ]] 00:39:22.907 10:12:53 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 489065154 00:39:22.907 10:12:53 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:22.907 10:12:53 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:23.167 Running I/O for 1 seconds... 00:39:24.107 24644.00 IOPS, 96.27 MiB/s 00:39:24.107 Latency(us) 00:39:24.107 [2024-11-20T09:12:55.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:24.107 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:24.107 nvme0n1 : 1.01 24644.07 96.27 0.00 0.00 5178.81 4314.45 8738.13 00:39:24.107 [2024-11-20T09:12:55.023Z] =================================================================================================================== 00:39:24.107 [2024-11-20T09:12:55.023Z] Total : 24644.07 96.27 0.00 0.00 5178.81 4314.45 8738.13 00:39:24.107 { 00:39:24.107 "results": [ 00:39:24.107 { 00:39:24.107 "job": "nvme0n1", 00:39:24.107 "core_mask": "0x2", 00:39:24.107 "workload": "randread", 00:39:24.107 "status": "finished", 00:39:24.107 "queue_depth": 128, 00:39:24.107 "io_size": 4096, 00:39:24.107 "runtime": 1.005191, 00:39:24.107 "iops": 24644.072619034592, 00:39:24.108 "mibps": 96.26590866810388, 00:39:24.108 "io_failed": 0, 00:39:24.108 "io_timeout": 0, 00:39:24.108 "avg_latency_us": 5178.807109101674, 00:39:24.108 "min_latency_us": 4314.453333333333, 00:39:24.108 "max_latency_us": 8738.133333333333 00:39:24.108 } 00:39:24.108 ], 00:39:24.108 "core_count": 1 00:39:24.108 } 00:39:24.108 10:12:54 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:24.108 10:12:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:24.368 10:12:55 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:24.368 10:12:55 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:24.368 10:12:55 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:24.368 10:12:55 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:24.368 10:12:55 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:24.368 10:12:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:24.368 10:12:55 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:24.368 10:12:55 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:24.368 10:12:55 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:24.368 10:12:55 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:24.368 10:12:55 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:39:24.368 10:12:55 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:39:24.368 10:12:55 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:24.368 10:12:55 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:24.368 10:12:55 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:24.368 10:12:55 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:24.368 10:12:55 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:24.368 10:12:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:24.628 [2024-11-20 10:12:55.403622] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:24.628 [2024-11-20 10:12:55.404084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86c480 (107): Transport endpoint is not connected 00:39:24.628 [2024-11-20 10:12:55.405080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86c480 (9): Bad file descriptor 00:39:24.628 [2024-11-20 10:12:55.406081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:24.628 [2024-11-20 10:12:55.406089] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:24.628 [2024-11-20 10:12:55.406095] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:24.628 [2024-11-20 10:12:55.406102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
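The attach attempt above is meant to fail: keyring/linux.sh@84 wraps it in NOT, the autotest helper that inverts an exit status, so the error trace here and the JSON-RPC error dump that follows are the pass condition rather than a harness fault. A rough sketch of the pattern (the helper body is a simplification, not the verbatim autotest_common.sh code; the reason key1 is rejected, a target configured only for key0's PSK, is inferred, since the subsystem setup sits outside this excerpt):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

NOT() {
    # succeed only when the wrapped command fails
    if "$@"; then
        return 1
    fi
    return 0
}

# key1 is well-formed PSK material, but the TLS handshake must be rejected
NOT "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1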
00:39:24.628 request: 00:39:24.628 { 00:39:24.628 "name": "nvme0", 00:39:24.628 "trtype": "tcp", 00:39:24.628 "traddr": "127.0.0.1", 00:39:24.628 "adrfam": "ipv4", 00:39:24.628 "trsvcid": "4420", 00:39:24.628 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:24.628 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:24.628 "prchk_reftag": false, 00:39:24.628 "prchk_guard": false, 00:39:24.628 "hdgst": false, 00:39:24.628 "ddgst": false, 00:39:24.628 "psk": ":spdk-test:key1", 00:39:24.628 "allow_unrecognized_csi": false, 00:39:24.628 "method": "bdev_nvme_attach_controller", 00:39:24.628 "req_id": 1 00:39:24.628 } 00:39:24.628 Got JSON-RPC error response 00:39:24.628 response: 00:39:24.628 { 00:39:24.628 "code": -5, 00:39:24.628 "message": "Input/output error" 00:39:24.628 } 00:39:24.628 10:12:55 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:39:24.628 10:12:55 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:24.628 10:12:55 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:24.628 10:12:55 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:24.628 10:12:55 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:24.628 10:12:55 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:24.628 10:12:55 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:24.628 10:12:55 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:24.628 10:12:55 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:24.628 10:12:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:24.629 10:12:55 keyring_linux -- keyring/linux.sh@33 -- # sn=489065154 00:39:24.629 10:12:55 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 489065154 00:39:24.629 1 links removed 00:39:24.629 10:12:55 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:24.629 10:12:55 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:24.629 10:12:55 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:24.629 10:12:55 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:24.629 10:12:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:24.629 10:12:55 keyring_linux -- keyring/linux.sh@33 -- # sn=967875523 00:39:24.629 10:12:55 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 967875523 00:39:24.629 1 links removed 00:39:24.629 10:12:55 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1710041 00:39:24.629 10:12:55 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1710041 ']' 00:39:24.629 10:12:55 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1710041 00:39:24.629 10:12:55 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:24.629 10:12:55 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:24.629 10:12:55 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1710041 00:39:24.629 10:12:55 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:24.629 10:12:55 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:24.629 10:12:55 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1710041' 00:39:24.629 killing process with pid 1710041 00:39:24.629 10:12:55 keyring_linux -- common/autotest_common.sh@973 -- # kill 1710041 00:39:24.629 Received shutdown signal, test time was about 1.000000 seconds 00:39:24.629 00:39:24.629 
Latency(us) 00:39:24.629 [2024-11-20T09:12:55.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:24.629 [2024-11-20T09:12:55.545Z] =================================================================================================================== 00:39:24.629 [2024-11-20T09:12:55.545Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:24.629 10:12:55 keyring_linux -- common/autotest_common.sh@978 -- # wait 1710041 00:39:24.889 10:12:55 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1709703 00:39:24.889 10:12:55 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1709703 ']' 00:39:24.889 10:12:55 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1709703 00:39:24.889 10:12:55 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:24.889 10:12:55 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:24.889 10:12:55 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1709703 00:39:24.889 10:12:55 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:24.889 10:12:55 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:24.889 10:12:55 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1709703' 00:39:24.889 killing process with pid 1709703 00:39:24.889 10:12:55 keyring_linux -- common/autotest_common.sh@973 -- # kill 1709703 00:39:24.889 10:12:55 keyring_linux -- common/autotest_common.sh@978 -- # wait 1709703 00:39:25.150 00:39:25.150 real 0m5.250s 00:39:25.150 user 0m9.817s 00:39:25.150 sys 0m1.445s 00:39:25.150 10:12:55 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:25.150 10:12:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:25.150 ************************************ 00:39:25.150 END TEST keyring_linux 00:39:25.150 ************************************ 00:39:25.150 10:12:55 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:39:25.150 10:12:55 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:39:25.150 10:12:55 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:39:25.150 10:12:55 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:39:25.150 10:12:55 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:39:25.150 10:12:55 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:39:25.150 10:12:55 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:39:25.150 10:12:55 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:39:25.150 10:12:55 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:39:25.150 10:12:55 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:39:25.150 10:12:55 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:39:25.150 10:12:55 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:39:25.150 10:12:55 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:39:25.150 10:12:55 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:39:25.150 10:12:55 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:39:25.150 10:12:55 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:39:25.150 10:12:55 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:39:25.150 10:12:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:25.150 10:12:55 -- common/autotest_common.sh@10 -- # set +x 00:39:25.150 10:12:55 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:39:25.150 10:12:55 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:39:25.150 10:12:55 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:39:25.150 10:12:55 -- common/autotest_common.sh@10 -- # set +x 00:39:33.284 INFO: APP EXITING 
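Both teardown paths above share one shape: killprocess <pid> for the bdevperf and spdk_tgt instances, each followed by wait so the exit is reaped before the next stage. A condensed sketch of the helper as it appears to behave in the trace (the real autotest_common.sh version also checks the process name via ps and refuses to signal sudo; this body is an approximation):

killprocess() {
    local pid=$1
    kill -0 "$pid" 2> /dev/null || return 0   # already gone, nothing to do
    kill "$pid"
    wait "$pid" || true   # reap the child; non-zero exit after SIGTERM is expected
}

killprocess 1710041   # bdevperf (reactor_1)
killprocess 1709703   # spdk_tgt (reactor_0)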
00:39:33.284 INFO: killing all VMs 00:39:33.284 INFO: killing vhost app 00:39:33.284 WARN: no vhost pid file found 00:39:33.284 INFO: EXIT DONE 00:39:36.583 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:39:36.583 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:39:36.583 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:39:36.583 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:39:36.583 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:39:36.583 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:39:36.583 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:39:36.583 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:39:36.583 0000:65:00.0 (144d a80a): Already using the nvme driver 00:39:36.583 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:39:36.583 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:39:36.583 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:39:36.583 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:39:36.583 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:39:36.583 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:39:36.583 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:39:36.583 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:39:40.784 Cleaning 00:39:40.784 Removing: /var/run/dpdk/spdk0/config 00:39:40.784 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:40.784 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:40.784 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:40.784 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:40.784 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:40.784 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:40.784 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:40.784 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:40.784 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:40.784 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:40.784 Removing: /var/run/dpdk/spdk1/config 00:39:40.784 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:40.784 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:40.784 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:40.784 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:40.784 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:40.784 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:40.784 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:40.784 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:40.784 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:40.784 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:40.784 Removing: /var/run/dpdk/spdk2/config 00:39:40.784 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:40.784 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:40.784 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:40.784 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:40.784 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:40.784 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:40.784 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:40.784 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:40.784 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:40.784 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:40.784 Removing: 
/var/run/dpdk/spdk3/config 00:39:40.784 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:40.784 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:40.784 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:40.784 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:40.784 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:40.784 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:40.784 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:40.784 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:40.784 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:40.784 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:40.784 Removing: /var/run/dpdk/spdk4/config 00:39:40.784 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:40.784 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:40.784 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:40.784 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:40.784 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:40.784 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:40.784 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:40.784 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:40.785 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:40.785 Removing: /var/run/dpdk/spdk4/hugepage_info 00:39:40.785 Removing: /dev/shm/bdev_svc_trace.1 00:39:40.785 Removing: /dev/shm/nvmf_trace.0 00:39:40.785 Removing: /dev/shm/spdk_tgt_trace.pid1131895 00:39:40.785 Removing: /var/run/dpdk/spdk0 00:39:40.785 Removing: /var/run/dpdk/spdk1 00:39:40.785 Removing: /var/run/dpdk/spdk2 00:39:40.785 Removing: /var/run/dpdk/spdk3 00:39:40.785 Removing: /var/run/dpdk/spdk4 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1130386 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1131895 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1132745 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1133787 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1134128 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1135191 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1135355 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1135664 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1136801 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1137554 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1137922 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1138258 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1138624 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1138939 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1139322 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1139698 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1140087 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1141420 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1145144 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1145438 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1145782 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1145953 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1146465 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1146664 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1147036 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1147322 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1147556 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1147750 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1148014 00:39:40.785 Removing: /var/run/dpdk/spdk_pid1148122 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1148596 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1148921 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1149327 00:39:41.045 Removing: 
/var/run/dpdk/spdk_pid1153988 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1159238 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1171284 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1172123 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1177338 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1177691 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1183031 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1190109 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1193821 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1206356 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1217289 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1219441 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1220456 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1241127 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1246043 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1302926 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1309570 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1316568 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1324520 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1324593 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1325614 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1326670 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1327741 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1328343 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1328438 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1328679 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1328783 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1328788 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1329792 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1330796 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1331810 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1332474 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1332530 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1332811 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1334252 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1335651 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1345408 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1380031 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1385525 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1387628 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1390206 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1390455 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1390799 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1391140 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1391861 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1393875 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1395091 00:39:41.045 Removing: /var/run/dpdk/spdk_pid1395667 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1398382 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1399081 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1399792 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1404854 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1411560 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1411561 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1411562 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1416245 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1426487 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1431314 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1438726 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1440592 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1442441 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1443968 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1449662 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1454984 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1459915 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1469108 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1469226 00:39:41.305 Removing: 
/var/run/dpdk/spdk_pid1474317 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1474574 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1474682 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1475323 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1475330 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1480793 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1481534 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1487038 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1490181 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1497345 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1504012 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1514152 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1522809 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1522813 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1545767 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1546592 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1547720 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1548598 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1549626 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1550343 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1551049 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1551728 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1556945 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1557161 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1564480 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1564691 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1571205 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1576350 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1587967 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1588705 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1593785 00:39:41.305 Removing: /var/run/dpdk/spdk_pid1594218 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1599715 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1606492 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1609510 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1621655 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1632198 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1634063 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1635195 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1655525 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1660242 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1663472 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1671093 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1671211 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1677127 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1679326 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1681749 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1683025 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1685544 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1686792 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1696965 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1697571 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1698134 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1701536 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1702204 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1702679 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1707362 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1707452 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1709267 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1709703 00:39:41.566 Removing: /var/run/dpdk/spdk_pid1710041 00:39:41.566 Clean 00:39:41.566 10:13:12 -- common/autotest_common.sh@1453 -- # return 0 00:39:41.566 10:13:12 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:39:41.566 10:13:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:41.566 10:13:12 -- common/autotest_common.sh@10 -- # set +x 00:39:41.827 10:13:12 -- 
00:39:41.827 10:13:12 -- common/autotest_common.sh@732 -- # xtrace_disable
00:39:41.827 10:13:12 -- common/autotest_common.sh@10 -- # set +x
00:39:41.827 10:13:12 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:41.827 10:13:12 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:39:41.827 10:13:12 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:39:41.827 10:13:12 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:39:41.827 10:13:12 -- spdk/autotest.sh@398 -- # hostname
00:39:41.827 10:13:12 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:39:42.087 geninfo: WARNING: invalid characters removed from testname!
00:40:08.665 10:13:38 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:10.576 10:13:40 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:11.996 10:13:42 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:13.433 10:13:44 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:15.343 10:13:45 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:16.729 10:13:47 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
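[Editor's note] The lcov invocations above follow the usual coverage post-processing pattern: capture counters from the instrumented tree (-c), merge the test capture with the pre-test baseline (-a), then strip third-party and helper sources from the combined tracefile (-r). A minimal sketch of the same flow, with placeholder paths and test name rather than the exact CI values; the genhtml step is an assumed extra for local use, it is not run in this log:

    lcov -q -c --no-external -d ./spdk -t my-test-host -o cov_test.info   # capture post-test counters
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info           # merge baseline + test captures
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info                # drop bundled DPDK sources
    lcov -q -r cov_total.info '/usr/*' -o cov_total.info                  # drop system headers
    genhtml -q -o coverage_html cov_total.info                            # optional: render an HTML report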
00:40:18.644 10:13:49 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:40:18.644 10:13:49 -- spdk/autorun.sh@1 -- $ timing_finish
00:40:18.644 10:13:49 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:40:18.644 10:13:49 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:40:18.644 10:13:49 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:40:18.644 10:13:49 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
+ [[ -n 1044976 ]]
+ sudo kill 1044976
00:40:18.655 [Pipeline] }
00:40:18.670 [Pipeline] // stage
00:40:18.677 [Pipeline] }
00:40:18.692 [Pipeline] // timeout
00:40:18.699 [Pipeline] }
00:40:18.714 [Pipeline] // catchError
00:40:18.720 [Pipeline] }
00:40:18.736 [Pipeline] // wrap
00:40:18.744 [Pipeline] }
00:40:18.758 [Pipeline] // catchError
00:40:18.768 [Pipeline] stage
00:40:18.771 [Pipeline] { (Epilogue)
00:40:18.785 [Pipeline] catchError
00:40:18.788 [Pipeline] {
00:40:18.803 [Pipeline] echo
00:40:18.805 Cleanup processes
00:40:18.812 [Pipeline] sh
00:40:19.103 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:19.103 1723067 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:19.119 [Pipeline] sh
00:40:19.410 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:19.410 ++ grep -v 'sudo pgrep'
00:40:19.410 ++ awk '{print $1}'
00:40:19.410 + sudo kill -9
00:40:19.410 + true
00:40:19.424 [Pipeline] sh
00:40:19.717 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:40:31.980 [Pipeline] sh
00:40:32.268 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:40:32.268 Artifacts sizes are good
00:40:32.285 [Pipeline] archiveArtifacts
00:40:32.292 Archiving artifacts
00:40:32.455 [Pipeline] sh
00:40:32.768 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:40:32.784 [Pipeline] cleanWs
00:40:32.795 [WS-CLEANUP] Deleting project workspace...
00:40:32.795 [WS-CLEANUP] Deferred wipeout is used...
00:40:32.803 [WS-CLEANUP] done
00:40:32.804 [Pipeline] }
00:40:32.823 [Pipeline] // catchError
00:40:32.836 [Pipeline] sh
00:40:33.124 + logger -p user.info -t JENKINS-CI
00:40:33.134 [Pipeline] }
00:40:33.147 [Pipeline] // stage
00:40:33.153 [Pipeline] }
00:40:33.167 [Pipeline] // node
00:40:33.173 [Pipeline] End of Pipeline
00:40:33.206 Finished: SUCCESS
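[Editor's note] The "Cleanup processes" step in the epilogue above is a common pgrep-based idiom: list any processes still running out of the workspace, drop the pgrep invocation itself from the match, and force-kill whatever remains. A minimal sketch of that idiom as a single shell snippet (the workspace path is an example, not a fixed value):

    # collect PIDs of leftover test processes; -a prints the full command line, -f matches against it
    pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk | grep -v 'sudo pgrep' | awk '{print $1}')
    # '|| true' keeps the step green when nothing was left to kill (kill fails on an empty argument list)
    sudo kill -9 $pids || true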